Debugging Browsers – Tools and Techniques

Last update: March 29, 2021

Earlier this year, I shared a post on how you can become an expert on web browsers from the comfort of your desk… or anywhere else you have an internet connection. In that post, I mostly covered how to search through the source, review issue reports, and find design documentation. I also provided a long list of browser experts you might consider following on Twitter.

In today’s post, I’d like to give a quick summary of some of the tools and techniques I use for diagnosing browser problems.

The Importance of Observation

Specs lie. Code misleads. Everything changes over time. Observation reveals what’s actually going on– not what the PM designed, or the Dev intended. If you want to know how something is going to behave, just try it!


It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so. -various

In many cases, the fastest route to troubleshoot problems is to observe exactly what is happening on the network, disk, or screen and only then start looking at code and specs to figure out why.

Built-in Tools

The F12 Developer Tools (just hit F12) are tremendously useful for determining why a given website behaves in a certain way. In many cases, the DevTools Console will flag an observed problem with a helpful error message. I don’t know of a great tutorial, but there are likely some on YouTube. One non-obvious feature of the DevTools is that you can use a Desktop browser’s DevTools to remote debug a browser running on a mobile device.

Chromium’s NetLogs (see chrome://net-export ) contain tons of detailed information about almost every aspect of networking, as well as other useful diagnostic data (the user’s enabled extensions, field trial experimental settings, etc). You can analyze NetLogs using a variety of free tools.

Chromium Tracing (see chrome://tracing ) allows you to diagnose performance issues in Chromium-based browsers using extremely in-depth tracelog data. Analysis of these logs isn’t for the faint-of-heart.

Chromium Logging (using the --enable-logging command line argument) is useful in diagnosing a number of internal issues in Chromium subsystems. Collecting and analyzing these logs is non-trivial, but it is sometimes the fastest way to root-cause tricky problems.

Chromium includes over 20 Internals Pages that allow you to view detailed information about media playback, data sync, and other features. Visit chrome://chrome-urls and search for -internals to see the list.

Browser Extensions

The VisBug Chrome Extension – Easily manipulate any page layout, directly in your browser.

postMessage Debugger – This extension prints messages sent with postMessage to the console; see below for a rough extension-free equivalent.

Extension source viewer – View the source of browser extensions directly from the Web Store listing.
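As a rough extension-free alternative to the postMessage Debugger, you can paste a one-liner into the DevTools Console; this sketch only sees messages delivered to the current frame:

// Log each message delivered to this window, along with the sender's origin.
window.addEventListener('message', (e) => console.log('postMessage from', e.origin, e.data));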

Cross-Platform Tools

WireShark allows packet-level analysis of network traffic. This can be useful in rare cases where a network bug depends on the exact packet size and timing.

The Fiddler Web Debugger allows import of NetLog and HAR traffic captures, and enables losslessly capturing requests and responses from any browser. While Fiddler Classic is (effectively) Windows-only, Fiddler Everywhere runs on Windows, Mac, and Linux.

Windows Tools

If you need to watch file or registry key creation/read/write/deletion, or thread/process creations and exits, then Sysinternals Process Monitor has got your back. For instance, this helped us easily root cause a bug where launching a Chromium-based browser would delete a file owned by Chrome.

If you want to explore information about process sandboxing, startup parameters, Job limits, etc., then Sysinternals Process Explorer is the tool to use. For instance, it helped us track down a problem where a browser window was unexpectedly appearing: the user closed all browser instances, waited for the unexpected browser window to appear, then examined the process tree to see which application had started it. In this case, Edge was launched by sr.exe:

If you need to debug a scenario involving drag/drop or copy/paste, you can use ClipSpy (binary only) or NirSoft InsideClipboard.

Bisecting

Bisecting is the process of making a repeated set of observations to determine the build in which a problem appeared (or disappeared). From there, you can easily assign bugs to the right owners for rapid fixes.

See the Bisect Regressions section of this post for details on how to use Chromium’s bisect-builds.py script (which does not require you to build Chromium or download all of its tools and source code) to bisect problems. Here’s another bisection case-study.

I’m sure there are a hundred great tools I’ve omitted. This post will grow over time. If you’ve got a suggestion for a great diagnostic tool, share it with us!

-Eric

Local Data Encryption in Chromium

Back in February, I wrote about browser password managers and mentioned that it’s important to understand the threat model when deciding how to implement features and their security protections.

Generally speaking, “keeping secrets from yourself” is a fool’s errand, so it’s a waste of time and effort to encrypt data if you have to store the decryption key in a place that’s accessible to the attacker. That’s one reason why physically local attacks and machines infected with malware are generally outside the browser’s threat model: if an attacker has access to the keys, using encryption isn’t going to protect your data.

Web browsers store a variety of highly sensitive data, including credit card numbers, passwords and cookies (often containing authentication tokens functionally equivalent to passwords). When storing this extra-sensitive data, Chromium encrypts it using AES256, storing the encryption key in an OS storage area. This feature is called local data encryption. Not all of the browser’s data stores use encryption– for instance, the browser cache does not. If your device is at risk of theft, you should be using your OS’ full-disk encryption feature, e.g. BitLocker on Windows.

The profile’s encryption key is protected by OSCrypt: On Windows, the OS Storage area is DPAPI; on Mac, it’s the Keychain; on Linux, it’s Gnome Keyring or KWallet.

Notably, all of these storage areas encrypt the AES256 key using a key accessible to (some or all1) processes running as the user– this means that if your PC is infected with malware, the bad guys can get decrypted access to the browser’s storage areas.

However, that’s not to say that Local Data Encryption is entirely without value– for instance, I recently came across a misconfigured web server that allowed any visitor to explore the server owner’s profile (e.g. c:\users\sally), including their Chrome profile folder. Because the browser key in the profile is encrypted using a key stored outside of the Chrome profile, their most sensitive data remained encrypted.

Similarly, if a laptop isn’t protected with Full Disk Encryption, Local Data Encryption will make a thief’s life harder.

Tradeoffs?

Okay, so Local Data Encryption might be useful. What are the downsides?

The obvious tradeoff is simple and mild: There’s always a performance cost to encrypting and decrypting data. However, AES256 is extremely fast (>1GB/sec) on modern hardware, and the data size of cookies and credentials is relatively small.

The bigger risk is complexity: If something goes wrong with either of the keys (the browser’s key that encrypts the data, or the OS’s key that encrypts the browser’s key), then the user’s cookie and credential data will be unrecoverable. The user will be forced to log back into every website and re-save all of the credentials in their password manager (or recover their credentials from the cloud using the browser’s sync feature).

Unfortunately, I seem to be a magnet for such problems.

On Mac, Edge recently had a problem where the browser would fail to get the browser key from the OS keychain. The browser would offer to wipe the keychain (losing all of your data), but ignoring the error message and restarting would typically correct the problem. A fix for that bug was recently issued.

On Windows, DPAPI failures are typically silent– your data disappears with nary a message box.

When I first rejoined Microsoft in 2018, a bug in AAD corrupted my OS DPAPI key, causing lsass to spin a CPU core forever whenever a Chromium-based browser launched. Troubleshooting this problem required months of effort.

More recently, we’ve heard from some users on Windows 10 that Edge and Chrome forget their data frequently (and similar effects are seen in other DPAPI-using applications).

Users in this state who visit chrome://histograms/OSCrypt in Chrome or Edge in the browser session where they first notice their sensitive data has gone missing will see an entry inside OSCrypt.Win.KeyDecryptionError with a value of -2146893813 (NTE_BAD_KEY_STATE), indicating that the OS API was unable to use the currently logged-in user’s credentials to decrypt the browser’s encryption key:

Fortunately (for us, not for him), this problem hit one of the best engineers in the world, and he was able to develop a solid theory of the root cause of the problem. If you find your system in this state, try running the following command in PowerShell:

Get-ScheduledTask | foreach { If (([xml](Export-ScheduledTask -TaskName $_.TaskName -TaskPath $_.TaskPath)).GetElementsByTagName("LogonType").'#text' -eq "S4U") { $_.TaskName } }

This will list any scheduled tasks using the S4U feature suspected of causing the incorrect DPAPI credentials:

Update: The S4U bug was fixed for Windows 10 2004 and 20H2 as a part of the February 2021 Windows Updates.

-Eric

1 The question of which processes can ask the OS to decrypt the browser’s key is a somewhat interesting one. On Windows, Chromium’s use of DPAPI’s CryptProtectData allows any process running as the user to make the request; there’s no attempt to use additional entropy to do “better” encryption, largely because there’s nowhere safe to store that additional entropy. On modern Windows, there are some other mechanisms that might provide somewhat more isolation than raw CryptProtectData, but full-trust malware is always going to be able to find a way to get at the data.

On Mac, the Keychain protection restricts access to data such that it’s not accessible to every process running as the user, but this doesn’t mean the data is immune from malware. Malware must instead use Chrome as a sock-puppet, having it perform all of the data decryption tasks, driving it via extensibility interfaces or other mechanisms.

The overall threat model against local attackers is further complicated by the mechanisms and constraints of process isolation: for instance, because an Admin process can dump the memory of a user-level process, or inject threads into that process, malware can also steal the data after the browser has decrypted it.

Mobile platforms (iOS/Android) tend to have the strongest story here, with more robust process isolation, code-signing requirements, hardware-backed secure enclaves, etc.

Web Debugging: Watching Element Changes

Recently, I was debugging a regression where I wanted to watch changes to an element’s attribute at runtime. Specifically, I wanted to watch the URL change when I selected different colors in Tesla’s customizer. By using the Inspect Element tool, I can find the relevant image in the tree, and when I pick a different color in the page, the Developer Tools briefly highlight the changes to the image’s attributes:

Unfortunately, you might notice that the value in the xlink:href attribute contains a ... in the middle of it, making it difficult to see what’s changed. I noted that the context menu offers a handy “Break on” submenu to break execution whenever the node changes:

…but I lamented that there’s no Watch attribute command to log the changing URLs to the console. Mozillian April King offered a helpful snippet that provides this functionality.

After selecting the image (which points Console variable $0 at the element), type the following in the Console:

// Note: the MutationObserver options' attributes field is a boolean, not a list.
new MutationObserver((muts) => console.log(muts[0].target.attributes['xlink:href']))
  .observe($0, { attributes: true });

This simple snippet creates a MutationObserver to watch the selected element’s xlink:href attribute, and every time it changes, the callback writes the current attribute value to the console:

Cool, huh?

Thanks, April!

-Eric

Browser Memory Limits

Web browsers are notorious for being memory hogs, but this can be a bit misleading– in most cases, the memory used by the loaded pages accounts for the majority of memory consumption.

Unfortunately, some pages are not very good stewards of the system’s memory. One particularly common problem is memory leaks– a site establishes a fetch() connection to an endless stream of data from some web service, then tries to hold onto the ever-growing response data forever.
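To make that anti-pattern concrete, here’s a minimal sketch of such a leak (the endpoint URL is hypothetical):

const leakedChunks = [];

async function leakForever() {
  // Open a response stream that never ends...
  const response = await fetch('https://example.com/endless-stream');
  const reader = response.body.getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // ...and retain every chunk forever: memory grows without bound.
    leakedChunks.push(value);
  }
}
leakForever();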

Sandbox Limits

In Chromium-based browsers on Windows1 and Linux, a sandboxed 64-bit process’ memory consumption is bounded by a limit on the Windows Job object holding the process. For the renderer processes that load pages and run JavaScript, the limit was 4 GB back in 2017, but now it can be as high as 16 GB:

int64_t physical_memory = base::SysInfo::AmountOfPhysicalMemory();
...
if (physical_memory > 16 * GB) {
  memory_limit = 16 * GB;
} else if (physical_memory > 8 * GB) {
  memory_limit = 8 * GB;
}

If the tab crashes and the error page shows SBOX_FATAL_MEMORY_EXCEEDED, it means that the tab used more memory than permitted for the sandboxed process2.

The sandbox limits are so high that exceeding them is almost always an indication of a memory leak or JavaScript error on the part of the site.

Running out of memory

Beyond hitting the sandbox limits, a process can simply run out of memory– if it asks for memory from the OS and the OS says “Sorry, nope”, the process will typically crash.

If the tab crashes, the error code will be rendered in the page:

Or, that’s what happens in the ideal case, at least.

If your system is truly out of free memory, all sorts of things are likely to fail– random processes around the system will likely fall over, and the critical top-level Browser process itself might crash, blowing away all of your tabs.

In my case, the crash reporter itself crashes, leading to this unfriendly dialog:

To make these sorts of catastrophic crashes less likely, allow Windows to manage the size of your page file.

Turning off the OS page file (as I had in the screenshot above) means that when your last block of physical memory is exhausted, rather than merely slowing down, random processes on your system will fall over.

32bit Processes and Fragmentation

Notably, no sandbox limit is set for a 32-bit browser instance; on 32-bit Windows, a 32-bit process can usually allocate only 2 GB (std::numeric_limits<int>::max() == 2147483647) before crashing with an OOM. For a 32-bit process running on 64-bit Windows, a process compiled as LargeAddressAware (like Chromium) can allocate up to 4 GB.

32-bit processes also often encounter another problem– even if you haven’t reached the 2 GB process limit, it’s often hard to allocate more than a few hundred megabytes of contiguous memory because of address space fragmentation. If you encounter an “Out of Memory” error in a process that doesn’t seem to be using very much memory, visit chrome://version to ensure that you aren’t using a 32-bit browser.

Test Page and Tooling

I’ve built a Memory Use test page that allows you to use gobs of memory to see how your browser (and Operating System) reacts. Note that memory accounting is complicated and sneaky: ArrayBuffers aren’t considered JavaScript memory, and on Mac Chromium, they’re not backed by “real memory” until used.
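For a rough idea of how such a test page works, here’s a sketch (not the page’s actual code) that allocates and commits 100 MB per call:

const hogs = [];

function allocate100MB() {
  const buf = new ArrayBuffer(100 * 1024 * 1024);
  // Touch every byte so the OS must actually back the allocation;
  // as noted above, untouched ArrayBuffers may not consume "real memory".
  new Uint8Array(buf).fill(1);
  hogs.push(buf); // keep a reference so the GC can't reclaim it
  console.log(`Now holding ${hogs.length * 100} MB in ArrayBuffers`);
}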

You can use the Browser Task Manager (hit Shift+Esc on Windows or use Window > Task Manager on Mac) to see how much memory your tabs and browser extensions are using:

You can also use the Memory tab in the F12 Developer tools to peek at heap memory usage. Click the Take Snapshot button to get a peek at where memory is being used (and potentially wasted):

Using lots of memory isn’t necessarily bad– memory not being used is memory that’s going to waste. But you should always ensure that your web application isn’t holding onto data that it will never need again.

Memory: Use it, but don’t abuse it.

-Eric

1 Due to platform limitations, Chromium on OS X does not limit the sandbox size.

2 The error code isn’t fully reliable; Chrome’s test code notes:

// On 64-bit, the Job object should terminate the renderer on an OOM.
// However, if the system is low on memory already, then the allocator
// might just return a normal OOM before hitting the Job limit.

Alex Gough from Chromium provided the following breakdown of other memory-related limits for Windows/Linux:

Web-to-App Communication: The Native Messaging API

One of the most powerful mechanisms for Web-to-App communication is to use an extension that utilizes the NativeMessaging API. The NativeMessaging API allows an extension running inside the browser to exchange messages with a native-code “Host” executable running outside of the browser sandbox. That Host executable runs with the full privileges of the current user account, meaning that it can show UI, make network connections, read/write to any files to which the user has access, call privileged APIs, etc.

The NativeMessaging approach requires installing both a native executable and a browser extension (e.g. from the Chrome or Edge Web Store). Web pages cannot themselves communicate directly with a NativeMessaging host; they must use message passing APIs to communicate from the web page to the Extension, which then uses NativeMessaging to communicate with the executable running outside of the browser. This restriction adds implementation complexity, but is considerably safer than historical approaches like ActiveX controls with elevated brokers.

Implementing the Host Executable

The browser launches1 the Host executable in response to requests from an extension. On Windows, two command-line arguments are passed to the executable: the origin of the extension, and the browser’s HWND.

From the native code executable’s point-of-view, messages are received and sent using simple standard I/O streams. Messages are serialized as UTF8-encoded JSON, preceded by a 32-bit unsigned length in native byte order. Messages to the Host are capped at 4GB, and responses from the Host returned to the extension are capped at 1MB.

Hosts can be implemented using pretty much any language that supports standard I/O streams; using a cross-platform language like Go or Rust is probably a good choice if you aim to run on Windows, Mac, and Linux.
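For illustration, here’s a minimal sketch of a Host written for Node.js that echoes back every message it receives; the length-prefix framing is the same in any language (the message fields here are illustrative, and a little-endian host is assumed):

let buffer = Buffer.alloc(0);

process.stdin.on('data', (chunk) => {
  buffer = Buffer.concat([buffer, chunk]);
  // Each message is a 32-bit length (native byte order; little-endian assumed
  // here), followed by that many bytes of UTF8-encoded JSON.
  while (buffer.length >= 4) {
    const len = buffer.readUInt32LE(0);
    if (buffer.length < 4 + len) break; // wait for the rest of the message
    const msg = JSON.parse(buffer.slice(4, 4 + len).toString('utf8'));
    buffer = buffer.slice(4 + len);
    send({ echo: msg });
  }
});

function send(obj) {
  const json = Buffer.from(JSON.stringify(obj), 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32LE(json.length, 0);
  process.stdout.write(Buffer.concat([header, json]));
}

Because Node’s stdio streams deal in raw Buffers, this sketch sidesteps the text-mode footgun described next.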

Avoid A Footgun
Be sure to set your streams to binary mode, or you might miscompute the data length prefix if the data contains CR/LF characters, causing Chromium to think your message was malformed.

_setmode(_fileno(stdin), _O_BINARY);
_setmode(_fileno(stdout), _O_BINARY);

Open-source examples of Native Messaging hosts abound; you can find some on GitHub, e.g. by searching for allowed_origins. For instance, here’s a simple one written in C#.

Registering the Host

The Native Messaging Host (typically installed by a downloaded executable or MSI installer) describes itself using a JSON manifest file that specifies the Extensions allowed to invoke it. For instance, say I wanted to add a NativeMessaging host that would allow my browser extension to file a bug in a local Microsoft Access database. The registration for the host might look like this:

{
  "name": "com.bayden.moarTLSNative",
  "description": "MoarTLS Bug Filer",
  "path": "C:\\Program Files\\MoarTLSBugFiler\\native_messaging_host_for_bug_filing.exe",
  "type": "stdio",
  "allowed_origins": ["chrome-extension://emojohianibcocnaiionilkabjlppkjc/"]
}

On Windows, the host’s manifest is referenced in the registry, inside \Software\Microsoft\Edge\NativeMessagingHosts\ under either the HKLM or HKCU hive. By default, a reference in HKCU overrides an HKLM reference. For compatibility reasons (enabling Chrome Web Store extensions to work with Edge), Microsoft Edge will also check for NativeMessagingHosts registered within the Google Chrome registry key or file path:

On other platforms, the manifest is placed in a well-known path.

User-Level vs. System-Level Registration

Writing to HKLM (a so-called “System Level install”) requires that the installer run with Administrator permissions, so many extensions prefer to register within HKCU so that Admin permissions are not required for installation. However, there are two downsides to “User Level” registration:

  1. It requires every user account on a shared system to run the installer
  2. It does not work for some Enterprise configurations.

The latter requires some explanation. The Microsoft Edge team (and various other external organizations) publish “Security Baseline” documents that give Enterprises and other organizations advice about best practices for securely deploying web browsers.

One element in the Microsoft Edge team’s baseline recommends that enterprises policy-disable support for “user-level” Native Messaging host executables. This policy directive helps ensure that native code executables that run outside of the browser sandbox were properly vetted and installed by the organization (and have not been installed by a rogue end-user, for instance). The specific mechanism of enforcement is that a browser with this policy set will refuse to load a NativeMessagingHost unless it is registered in the HKLM hive of the registry; HKCU-registered hosts are simply ignored.

In order for Enterprises to deploy browser extensions that utilize NativeMessaging with NativeMessagingUserLevelHosts policy-disabled, such extension installers must offer the option to register the messaging host in HKLM. Those System-level installers will then require Admin-elevation to run, so it’s probably worthwhile to offer either two installers (one for User-level installs and one for System-level installs) or a single installer that elevates to install to HKLM if requested.

Calling the NativeMessaging Host

From the JavaScript extension platform point-of-view, messages are sent using a simple postMessage API.

To communicate with a Native Messaging host, the extension must include the nativeMessaging permission in its manifest. After doing so, it can send a message to the Host like so:

var port = chrome.runtime.connectNative('com.bayden.moarTLSNative');
port.postMessage({ url: activeTabs[0].url });

When the connectNative call executes, the browser launches the native_messaging_host_for_bug_filing.exe executable referenced in the manifest. The subsequent postMessage call results in writing the message data to the process’ stdin I/O stream. If the process responds, port’s onMessage handler fires; if the process disconnects, the onDisconnect handler is invoked.
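For completeness, here’s a sketch of wiring up those handlers (the shape of the response object is whatever your Host chooses to send):

port.onMessage.addListener((response) => {
  console.log('Received from host:', response);
});
port.onDisconnect.addListener(() => {
  // chrome.runtime.lastError explains why the host went away, if known.
  console.log('Host disconnected:', chrome.runtime.lastError?.message);
});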


NativeMessaging is a remarkably powerful primitive for bi-directional communication with native apps. Please use it carefully– escaping the browser’s sandbox means that careless implementations might result in serious security vulnerabilities.

-Eric

1 As of Chromium 87, the way the executable is invoked on Windows is rather convoluted (cmd.exe is used as a proxy) and it may fail for some users. Avoid using any interesting characters (e.g. & and @) in the path to your Host executable.

Font Smoothing in Edge

Text rendering quality is an amazingly complicated topic, with hardware, settings, fonts, differing rendering engine philosophies, and user preferences all playing key roles. In some cases, however, almost everyone can agree that one rendering is superior to another. Consider, for instance, the text of this Gizmodo article as seen on one user’s computer:

You can use this fancy swipe-view widget to wipe between the renderings of the full paragraph:

Most people think the text for Edge looks awful, with unexpectedly chunky letters and irregular kerning, but the text for IE11 looks pretty good.

Investigation reveals that the problem here is that Edge and Firefox are respecting the system’s font smoothing setting, but IE11 is ignoring it.

Font Smoothing in Windows

Windows has three levels of font smoothing: Off, Basic/Standard, and ClearType. Here’s a quick chart showing the impact of each setting across three browser engines:

Notably, the IE11 rendering is pixel-for-pixel identical regardless of Windows settings– it renders with grayscale subpixel smoothing even when smoothing is off or ClearType is enabled. In contrast, if you zoom into the ClearType examples in Edge 86 and Firefox 80 you can see subpixel smoothing at work, with tiny colored fringes smoothing the edges of the characters.

Examining Smoothing Parameters

Font Smoothing is controlled by four registry values inside HKCU\Control Panel\Desktop. FontSmoothing supports two values {0=Off,2=On} and FontSmoothingType supports values {1=Basic,2=ClearType}. The FontSmoothingGamma parameter controls the darkness of the smoothing and accepts values between 1000 and 2200. You have to zoom in pretty close to see the effect:

The FontSmoothingOrientation flag controls the order of the red, green, and blue subpixels in the display; it supports two values {0=BGR, 1=RGB}. ClearType needs this information to understand which subpixels to illuminate when smoothing. RGB is the most common and the default.

Applications that need this information should use the SystemParametersInfo function to retrieve these parameters.

Tuning Parameters

End-users can enable FontSmoothing in the Windows Performance Options (Win+R, then SystemPropertiesPerformance.exe):


To enable ClearType and tune its settings for your displays and settings, run the ClearType Tuner Wizard (Win+R, then CTTune.exe):

The Tuner will walk you through a series of side-by-side text renderings, asking which of them looks best, a bit like an eye doctor determining the parameters for your prescription eyeglasses.

Note: If you use a Windows PC via a remote desktop connection, ensure that the “Font Smoothing” option is checked in the connection properties:

…and that font smoothing wasn’t disabled via policy on the remote server.

Checking Edge Status

You can determine which font smoothing method is presently used in Edge by visiting the URL edge://histograms/Microsoft.Fonts.FontSmoothingMethod.

The histogram will show a datapoint indicating the current smoothing method:

Other Culprits?

Windows settings do not account for all cases of text rendering dissatisfaction.

In some cases, the problem is that a website has selected a font not present on the user’s PC, forcing fallback to an inferior font lacking proper hinting data for smoothing. Within Chromium itself, the browser changes the default fixed width font from Courier New to Consolas if ClearType is enabled, because the latter has better hinting information. Similarly, in Edge 85, we improved font fallback for Chinese to prefer the (ClearType-optimized) Microsoft YaHei and Microsoft JhengHei fonts over legacy fonts.

In other cases, users may simply prefer darker text than ClearType generates, perhaps using a browser extension to achieve their preferences.

In other cases, the user’s hardware might not be optimally configured for font smoothing. For instance, if you run a monitor in Portrait mode, its pixels have a different layout. A device can report its subpixel geometry using a registry key.

If you see a case of poor text rendering across browsers that you cannot explain using the information in this post, please let us know about it!

-Eric

Managing Edge via Policy

The new Microsoft Edge offers a rich set of policies that enable IT administrators to control many aspects of its operation.

You can visit edge://policy/ to see the policies in effect in your current browser:

Clicking on a policy name will take you to the documentation for that policy. The Status column indicates whether the policy is in effect, in Error, or Ignored. A policy is in Error if the policy name is unrecognized or the policy value is malformed. A policy is Ignored if it is a Protected Policy and the machine is not Domain Joined or MDM managed. Policies are marked “Protected” if they are frequently abused by malware. For instance, policies controlling the content of the New Tab Page are protected because adware/malware commonly attempted to monetize users by silently changing their search engine and homepage when their “free” apps were installed on a user’s PC. Protected Policies are marked in the Edge documentation with the note:

This policy is available only on Windows instances that are joined to a Microsoft Active Directory domain, Windows 10 Pro or Enterprise instances that enrolled for device management, or macOS instances that are managed via MDM or joined to a domain via MCX.

When Edge detects that a device is in a managed state in which Protected Policies are allowed, it will show “Managed by your organization” at the bottom of the … menu.

Implementation mechanism

There’s no magic in how policies are implemented: while you should prefer using edge://policy to look at policies to get Edge’s own perspective about what policies are set, you can also view (and set) policies using the Windows Registry:

Careful, this thing is loaded…

You must take great care when configuring policies, as they are deliberately much more powerful than the options exposed to end-users. In particular, it is possible to set policies that will render the browser and the device it runs on vulnerable to attack from malicious websites.

Administrators should take great care when relaxing security restrictions through policy to avoid opening clients up to attack. For instance, avoid using entries like https://* in URLList permission controls– while such a rule may cover all of your Intranet Zone sites, it also includes any malicious site on the Internet using HTTPS.

… but Incomplete

Notably, not all settings in the browser can be controlled via policy. For instance, some of the web platform feature settings inside edge://settings/content can only be enabled/disabled entirely (instead of on a per-site basis), or may not be controllable at all.

In some cases, you may only be able to use a Master Preferences file to control the initial value for a setting, but the user may later change that value freely.

There’s a ton of great content about managing Edge in the Microsoft Edge Enterprise Documentation, including tables mapping Chrome and Edge Legacy policies to their Edge equivalents.

-Eric

Seamless Single Sign-On

There are many different authentication primitives built into browsers. The most common include Web Forms authentication, HTTP authentication, client certificate authentication, and the new WebAuthN standard. Numerous different authentication frameworks build atop these, and many enterprise websites support more than one scheme.

Each of the underlying authentication primitives has different characteristics: client certificate authentication is the most secure but is hard to broadly deploy, HTTP authentication works great for Intranets but poorly for most other scenarios, and Web Forms authentication gives the website the most UI flexibility but suffers from phishing risk and other problems. WebAuthN is the newest standard and is not yet supported by most sites.

Real World Authentication Flows

Many Enterprises will combine all of these schemes, using a flow something like:

  1. User navigates to https://app.corp.example
  2. The web application determines that the user is not logged in
  3. The user is redirected to https://login.corp.example
  4. The login provider checks to see whether the user has any cached authentication tokens, e.g. using the cookies accessible to the login provider.
  5. If not, the login provider tries to fetch https://clientcert.corp.example with a certificate filter specifying the internal CA root. If the user has a client certificate from that CA root, it is sent either silently or after a one-click prompt.
  6. If not, the login provider checks to see whether the user’s browser has any domain credentials using the HTTP Negotiate authentication scheme.
  7. If not, the login provider shows a traditional HTML form for login, ideally with a WebAuthN option that allows the user to use the new secure API rather than typing a password.
  8. After all of these steps, the user’s identity has been verified and is returned to the app.corp.example site.

In today’s post, I want to take a closer look at Step #6.

Silent HTTP Authentication

Unfortunately for our scenario, the HTTP Authentication scheme doesn’t support any sort of NoUI attribute, meaning that a server has no way to demand “Authenticate using the user’s domain credentials if and only if you can do so without prompting.”

WWW-Authenticate: Negotiate

And browsers’ HTTP Authentication prompts tend to be pretty ugly:

Depending upon client configuration and privacy mode, HTTP Authentication using the Negotiate (wrapping Kerberos/NTLM) or NTLM schemes may happen silently, or it may trigger the manual HTTP Authentication prompt.

So, at step #6, we’re stuck. If automatic HTTP authentication works, it’s great– the user is signed into the application with zero clicks and everything is convenient and secure.

Load-Bearing Quirks

Fortunately for our scenario (unfortunately for understandability), there’s a magic trick that authentication flows can use to try HTTP authentication silently. As far as I can tell, it was never designed for this purpose, but it’s now used extensively.

To help prevent phishing attacks, modern browsers will prevent1 an HTTP authentication prompt from appearing if the HTTP/401 authentication response was for a cross-site image resource. The reasoning here is that many public platforms will embed images from arbitrary URLs, and an attacker might successfully phish users by posting on a message board an image reference that demands authentication. An unwary user might inadvertently supply their credentials for the message board to the third party site.

As noted in Chromium:

  if (resource_type == blink::mojom::ResourceType::kImage &&
      IsBannedCrossSiteAuth(request.get(), passed_extra_data.get())) {
    // Prevent third-party image content from prompting for login, as this
    // is often a scam to extract credentials for another domain from the
    // user. Only block image loads, as the attack applies largely to the
    // "src" property of the <img> tag. It is common for web properties to
    // allow untrusted values for <img src>; this is considered a fair thing
    // for an HTML sanitizer to do. Conversely, any HTML sanitizer that didn't
    // filter sources for <script>, <link>, <embed>, <object>, <iframe> tags
    // would be considered vulnerable in and of itself.
    request->do_not_prompt_for_login = true;
    request->load_flags |= net::LOAD_DO_NOT_USE_EMBEDDED_IDENTITY;
  }

So, now we have the basis of our magic trick.

We add a cross-site image resource (e.g. from https://tryhttpauth.corp-intranet.com) to our login flow. If the image downloads successfully, we know the user’s browser has domain credentials and is willing to silently release them. If the image doesn’t download (because an HTTP/401 was returned and silently unanswered by the browser), then we know that we cannot use HTTP authentication and we must continue on to the WebForms/WebAuthN authentication mechanism.
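In code, the probe might look something like this sketch (the image path is hypothetical; the Date.now() cache-buster ensures the probe isn’t satisfied from cache):

function probeSilentAuth() {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);   // the browser silently answered the 401
    img.onerror = () => resolve(false); // the 401 went unanswered
    img.src = 'https://tryhttpauth.corp-intranet.com/probe.png?' + Date.now();
  });
}

probeSilentAuth().then((canUseIntegratedAuth) => {
  // If false, fall back to the WebForms/WebAuthN login mechanism.
});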

Update (Feb 2021): As of Chrome/Edge 88, this magic trick will now fail if the user has configured their browser to “Block 3rd Party Cookies”, because the browser now treats a cross-origin authentication demand as if it were a cookie. Credentials for the cross-origin image will be omitted, and the browser will conclude that HTTP authentication is not available.

-Eric

1 Note that this magic trick is defeated if you enable the AllowCrossOriginAuthPrompt policy, because that policy permits the authentication prompt to be shown.

Post-Script: Prompting for Credentials vs. Approving for Release

As an aside, the HTTP Authentication prompt shown in this flow is more annoying than it strictly needs to be. What it’s usually really asking is “May I release your credentials to this site?”:

…but for implementation simplicity and historical reasons the prompt instead forces the user to retype their username and password.

Beating Private Mode Blockers with an Ephemeral Profile

Back in 2018, I explained how some websites use various tricks to detect that visitors are using Private Mode browsers and force such users to log-in. The most common reason that such sites do this is that they’ve implemented a “Your first five articles are free, then you have to pay” model, and cookies or similar storage are used to keep track of the user’s read count.

The New Yorker magazine is one such site:

Unfortunately, such “Private Mode blockers” make it hard for those of us who use Private Mode for other reasons (I don’t want to leave any traces of my Beanie Baby shopping research!). Private Mode detectors also typically trigger for the Chromium-based browsers’ Guest Profile that you might use when borrowing a trusted friend’s computer.

So, what’s a privacy-conscious user to do?

If you’re using Firefox, you can use that browser’s “Containers” feature to isolate such sites into a partitioned container such that trackers from the site cannot follow you around the web.

If you use Microsoft Edge, you might consider creating your own “Ephemeral” browser profile for browsing sites that block InPrivate:

After you create the new profile, visit its Settings page at edge://settings/clearBrowsingDataOnClose and configure all storage areas to be cleared every time you close the browser1:

Note: Chrome does not offer a Clear on Close list, but does offer a limited Clear cookies and site data when you quit Chrome option.

You can then adjust any other settings you like, for instance, adjusting Tracking Protection to Strict in edge://settings/privacy or the like.

Then when you want to visit a site that blocks InPrivate, you can either open your Ephemeral profile from your profile icon, or use the Open link as command on a hyperlink’s context-menu:

Over time, browsers will continue to work to make Private Mode detectors less reliable, but it’s unlikely that they’ll ever be perfect. Creating an ephemeral profile that clears everything on exit is a useful trick to combat sites which prioritize their business model needs over your privacy.

-Eric

1 In Edge 85 and earlier, you must unfortunately close all browser windows (even from your main profile) to trigger the cleanup of your ephemeral profile; closing just the windows from the ephemeral profile alone is not enough. This bug was recently fixed in Edge 86.

Advanced Q&A

Q: How is this Ephemeral/ClearOnExit Profile different than a regular InPrivate Mode session?

A: There are a few key differences.

  1. InPrivate tries not to write anything to disk (although the OS memory manager might at any time decide to swap process memory to the disk), while true profiles do not impose such a limitation. The “no disk write” behavior of Private Mode is the primary source of web-platform-observable differences in behavior that allow sites to build Private Mode detectors.
  2. By default, your browser extensions do not load in InPrivate, but they can be configured to do so. In a different profile, you’ll have to install any desired extensions individually.
  3. By default, your credentials (usernames and passwords) do not autofill while InPrivate. In a different profile, your main profile’s credentials will not be available (and will be cleared on exit if configured to do so).
  4. InPrivate tabs do not perform Windows Integrated Authentication to Intranet sites automatically. Regular browser profiles do not have such a limitation.

Revealing Passwords

The Microsoft Edge browser, Edge Legacy, and Internet Explorer all offer a convenient mechanism for users to unmask their typing as they edit a password field:

Clicking the little eye icon disables the masking dots so that users can see the characters they’re typing:

This feature can be very useful for those of us who often mistype characters, and is especially important for users with various accessibility needs that can make error-free typing especially challenging. Keyboard users can hit ALT+F8 to toggle the reveal feature without using the mouse.

Nevertheless, Web Developers may disable this feature (for instance, if they offer their own version) by targeting the -ms-reveal pseudo-element on an input type=password field:

.classNoReveal::-ms-reveal {
  display: none;
}

If a site offers its own “reveal” feature, it should use CSS to hide the built-in feature to avoid confusing UI like this one:

Alternatively, sites may customize the Password Reveal Icon to better match their visual style.

Edge Legacy and Internet Explorer also respect a Windows policy (DisablePasswordReveal) that removes the password reveal button in various places throughout the system, including Edge Legacy and Internet Explorer. Some security configuration guides suggest setting this policy, arguing “Visible passwords may be seen by nearby persons, compromising them.” This is literally true; it is also true that such nearby persons might simply watch as the user types in their password manually.

Notably, this Windows policy is not respected by Edge 79 and above, so we’ve had a few questions about that. I’d like to point out a few non-obvious characteristics of this feature that might assuage security concerns.

The most obvious attack that administrators are worried about is that a passerby might use this mechanism to steal auto-filled passwords from an unlocked, unattended computer. This concern is misplaced1: when the browser’s Password Manager autofills a password, the reveal icon is removed:

The PasswordInputType code is smart too– an attacker cannot get the icon to appear by simply adding or deleting a few characters; it only reappears after the user completely removes all of the characters in the input field. The icon is hidden if the field is modified by JavaScript, and it’s hidden if focus leaves the input field.

All of these protections mean that the Password Reveal icon is unlikely to be abusable in any meaningful way. Of course, typing passwords at all is an anti-pattern– use the Password Manager to mitigate phishing attacks, and eliminate the use of passwords wherever possible.

-Eric

1 Notably, while concern about the reveal button is misplaced, it’s entirely possible to steal your own password using the Developer Tools or by running JavaScript from the omnibox.
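For example, a one-line snippet pasted into the Console (or run from the omnibox with a javascript: prefix) unmasks every password field on the page:

// Flip each password field to plain text, revealing its current value.
document.querySelectorAll('input[type=password]').forEach((el) => el.type = 'text');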