For over twenty years, browsers broadly supported two features that were often convenient but sometimes accidentally invoked, leading to data loss.

The first feature was that hitting backspace would send the user back one page in their navigation history. This was convenient for those of us who keep our hands on the keyboard, but could lead to data loss – for instance, when filling out a web form, if focus accidentally left a text box, hitting backspace could result in navigating away from the form. (Some browsers tried to restore the contents of the form if the user understood what happened and hit “forward”, but this feature was imperfect and might not work for all forms.) One of the browser UI leads complained about this behavior annually for a decade, and users periodically howled as they lost work. Finally, circa 2017, this feature was removed from Chrome, and Microsoft Edge followed soon thereafter. If you happened to love the old behavior and accept the risk of data loss, you can restore it via an extension.

The second feature was drop to navigate. Dragging and dropping a file into the browser’s content area (the body of the page) would (unless the page’s JavaScript was designed to handle the drop, e.g. to upload it or process it locally) navigate to that local file in the current tab. Some folks like that behavior – e.g. web developers loading HTML files from their local filesystem – but it risks the same data loss problem. If a web page doesn’t accept file uploads via drag/drop, the contents of that page will be blown away by navigation. Back in 2015, a bug was filed against Chromium suggesting that the default behavior was too dangerous, and many examples were provided where the default behavior could be problematic. Yesterday, I landed a tiny change for Chromium 85 [later merged to v84] such that dropping a file or URL into the content area of a tab will instead open the file in a new tab:

Dropping in the content area now opens it in a new tab:

If you do want to replace the content of the tab with the dropped file, you can simply drag/drop the file to the tab strip.

A small white arrow shows you which tab’s content will be replaced:

Dropping the file between tabs on the tab strip will insert a new tab at the selected location:

Chrome (85.0.4163/v84) and Microsoft Edge (85.0.541) include this change. Microsoft Edge Legacy didn’t support drop to navigate. Firefox still has the old behavior; the closest bug I could find is this one. Safari 13.1.1 still has the old behavior and replaces the content of the current tab with the local file. Safari Tech Preview 13.2 Release 108 instead navigates the tab to an error page (“NSPOSIXErrorDomain:1 Operation not permitted”).
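
As an aside for page authors: a page opts out of the browser’s default drop handling (navigating or opening a new tab) by cancelling the dragover and drop events itself. A minimal sketch, assuming the page just wants to read the dropped file locally:

document.addEventListener("dragover", (e) => e.preventDefault());
document.addEventListener("drop", (e) => {
  e.preventDefault();  // suppress the browser's default drop handling
  const file = e.dataTransfer.files[0];
  if (file) {
    // Read the dropped file's text in the page instead of navigating to it.
    file.text().then((text) => console.log(file.name, text.length));
  }
});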

-Eric

The Web Platform offers a great deal of power, and unfortunately evil websites go to great lengths to abuse it. One of the weakest (but simplest to implement) protections against such abuse is to block actions that were not preceded by a “User Gesture.” Such gestures (sometimes more precisely called User Activations) include a variety of simple actions, from clicking the mouse to typing a key; each is interpreted as “The user tried to do something in this web content.”

A single user gesture can unlock any of a surprisingly wide array of privileged (“gated”) actions:

  • Allow a popup window to open
  • Allow an Application Protocol to be invoked
  • Allow an OnBeforeUnload dialog box to show
  • Allow the Vibration API to vibrate the device
  • Allow script to take the window fullscreen
  • Allow the password manager to fill the username/password into the page in a way that JavaScript can see them
  • Allow the page to prompt the user for a file to upload
  • Impact the behavior of file downloads (e.g. prompting)
  • and many more

So, when you see a site show a UI like this:

…chances are good that what they’re really trying to do is trick you into performing a gesture (mouse click) so they can perform a privileged action– in this case, open a popup ad in a new tab.
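
A minimal sketch of how such a page works (the ad URL here is hypothetical): the script simply waits for any click, which supplies the gesture the popup blocker requires:

document.addEventListener("click", () => {
  // This handler runs during a user gesture, so the popup is permitted;
  // calling window.open() outside a gesture would typically be blocked.
  window.open("https://ads.example/landing", "_blank");
});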

Some gestures are considered “consumable”, meaning that a single user action allows only one privileged action; subsequent privileged actions require another gesture.

Unfortunately, even this weak protection is subject to both false positives (an unwanted granting of privilege) and false negatives (an action is unexpectedly blocked).

You can learn more about this topic (and the complexity of dealing with nested frames, etc) in the original Chromium User Activation v2 spec, and the User-Activation section of HTML5.

-Eric

For the first few years of the web, developers pretty much coded whatever they thought was cool and shipped it. Specifications, if written at all, were an afterthought.

Then, for the next two decades, spec authors drafted increasingly elaborate specifications with optional features and extensibility points meant to be used to enable future work.

Unfortunately, browser and server developers often only implemented enough of the specs to ensure interoperability, and rarely tested that their code worked properly in the face of features and data allowed by the specs but not implemented in the popular clients.

Over the years, web builders started to notice that specs’ extensibility points were rusting shut – if a new or upgraded client tried to make use of a new feature, or otherwise change what it sent as allowed by the specs, existing servers would fail when they encountered the new values.

In light of this, spec authors came up with a clever idea: clients should send random dummy values allowed by the spec, causing spec-non-compliant servers that fail to properly ignore those values to fail immediately. This concept is called GREASE (with the backronym “Generate Random Extensions And Sustain Extensibility“), and was first implemented for the values sent by the TLS handshake. When connecting to servers, clients would claim to support new ciphersuites and handshake extensions, and intolerant servers would fail. Users would holler, and engineers could follow up with the broken site’s authors and developers about how to fix their code. To avoid “breaking the web” too broadly, GREASE is typically enabled experimentally at first, in Canary and Dev channels. Only after the scope of the breakages is better understood does the change get enabled for most users.
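
As a concrete illustration: per RFC 8701 (which standardized GREASE for TLS), the reserved GREASE values follow a recognizable pattern, with both bytes of the value identical and ending in the nibble 0xA (0x0A0A, 0x1A1A, … 0xFAFA). A quick sketch of the check:

// Per RFC 8701, TLS GREASE values are 0x0A0A, 0x1A1A, 0x2A2A, ... 0xFAFA.
function isTlsGreaseValue(v) {
  return (v >> 8) === (v & 0xff) && (v & 0x0f) === 0x0a;
}
console.log([0x0a0a, 0x3a3a, 0xfafa, 0x1301].map(isTlsGreaseValue));
// -> [true, true, true, false]  (0x1301 is a real ciphersuite, TLS_AES_128_GCM_SHA256)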

GREASE has proven such a success for TLS handshakes that the idea has started to appear in new places. Last week, the Chromium project turned on GREASE for HTTP2 in Canary/Dev for 50% of users, causing connection failures to many popular sites, including some run by Microsoft. These sites will need to be fixed in order to properly load in the new builds of Chromium.

// Enable "greasing" HTTP/2, that is, sending SETTINGS parameters with
// reserved identifiers and sending frames of reserved types, respectively.
// If greasing Frame types, an HTTP/2 frame with a reserved frame type will
// be sent after every HEADERS and SETTINGS frame. The same frame will be
// sent out on all connections to prevent the retry logic from hiding broken
// servers.
NETWORK_SWITCH(kHttp2GreaseSettings, "http2-grease-settings")
NETWORK_SWITCH(kHttp2GreaseFrameType, "http2-grease-frame-type")

One interesting consequence of sending GREASE H2 Frames is that it requires moving the END_STREAM flag (recorded as fin=true in the netlog) from the HTTP2_SESSION_SEND_HEADERS frame into an empty (size=0) HTTP2_SESSION_SEND_DATA frame; unfortunately, the intervening GREASE Frame is not presently recorded in the netlog.

You can try H2 GREASE in Chrome Stable using command line flags that enable GREASE settings values and GREASE frames respectively:

chrome.exe --http2-grease-settings bing.com
chrome.exe --http2-grease-frame-type bing.com

Alternatively, you can disable the experiment in Dev/Canary:

chrome.exe --disable-features=Http2Grease

GREASE is baked into the new HTTP3 protocol (Cloudflare does it by default) and the new UA Client Hints specification (where it’s blowing up a fair number of sites). I expect we’ll see GREASE-like mechanisms appearing in most new web specs where there are optional or extensible features.
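
For example, the Sec-CH-UA header sent by a Chromium-based browser includes a deliberately bogus GREASE “brand” alongside the real ones; the exact nonsense token is intentionally varied between versions, but the header looks something like this (brand names and versions here are illustrative):

Sec-CH-UA: "Chromium";v="84", "Microsoft Edge";v="84", ";Not A Brand";v="99"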

-Eric

Someone complained that a Japanese page is garbled in Edge/Chrome, but renders with the correct characters in Firefox and IE:

The problem is that Chromium is using an unexpected character set to interpret the response in the HTML Parser. That happens because the server doesn’t send a proper character set directive. To avoid problems like this and improve performance, document authors should specify the character set in the HTTP response headers:

Content-Type: text/html; charset=utf-8

If a charset isn’t specified in the headers, Chrome looks for a META CHARSET declaration within the response body text (“note the absurdity of encoding the character encoding in the document that you’re trying to decode“). The HTML5 spec demands that documents be encoded in UTF-8, and that the charset declaration, if any, appears within the first 1024 bytes of the response.

Chrome checks the full head for a character-set directive, and if it doesn’t find one, ensures that it has looked through at least 1024 bytes before giving up.
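
For example, a well-formed page that can’t set the HTTP header declares the encoding immediately, as the first thing inside the head (this skeleton is illustrative):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8"> <!-- must appear within the first 1024 bytes -->
    <title>Example</title>
  </head>
  <body>…</body>
</html>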

Unfortunately, this site accidentally includes a div tag up in the head (ending the head section prematurely), and buries the charset down 1479 bytes into the response:

To avoid problems like this:

  1. Specify the CHARSET in the Content-Type response header, and
  2. Ensure META CHARSET appears as the first element of your HEAD.
  3. To avoid problems in legacy browsers, write it as utf-8 rather than as utf8.

Get your <HEAD> in order!

-Eric

While most HTTPS sites only authenticate the server (using a certificate sent by the website), HTTPS also supports a mutual authentication mode, whereby the client supplies a certificate that authenticates the visiting user’s identity. Such a certificate might be stored on a SmartCard, or used as a part of an OS identity feature like Windows Hello.

To request mutual authentication, servers send a CertificateRequest message to the client during the HTTPS handshake, specifying a criteria filter that the browser will use to find a client certificate to satisfy the server’s request.

If a client certificate is supplied in the browser’s Certificate response to the server’s challenge, the browser proves the user’s possession of that certificate using the private key that matches that client certificate’s public key.

A client may choose not to send a certificate (either because no matching certificate is available, or because the user declined to supply one)—in such cases, the server may terminate the handshake (showing a Client Certificate Required error message) or it may continue the handshake and attempt to authenticate the user via other means.
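
If you want to watch this exchange yourself, one option (assuming OpenSSL is installed; the certificate and key file names here are hypothetical) is to connect with a client certificate from the command line:

openssl s_client -connect example.com:443 -cert clientcert.pem -key clientkey.pem

Among other handshake details, s_client prints the “Acceptable client certificate CA names” the server sent, which is handy when debugging the filtering described below.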

Certificate Selection

The CertificateRequest message allows the server to specify criteria for the certificates it is willing to accept from the client, including details such as the certificate’s issuer, and key/signature/hash types.

The browser consults the Operating System’s trust store (Keychain on Mac OS X, certmgr.msc on Windows) to find any candidate certificates (unexpired certificates with the Client Authentication purpose set and a private key available) that match the server-supplied filtering criteria:

The private key for a given certificate might be stored on a SmartCard — when a SmartCard is inserted, the certificate(s) on it are “virtually” propagated to the OS trust store for use by browsers and other applications.

Certificates that meet the server’s filtering criteria are shown in a prompt:

If the user hits “Cancel”, the handshake is completed without sending a certificate. However, if the user selects a certificate, the browser caches that decision for the lifetime of the browser instance. The selected certificate will be resent on all new connections to the target origin and the prompt will not be shown again.

Today, there’s no good way to clear the selection decision, short of restarting the browser entirely. In contrast, legacy IE offered two very awkward mechanisms, the Clear SSLState button in the Internet Control Panel, and the ClearAuthenticationCache web API.

Automatic Selection of Client Certificate

Internet Explorer and Edge Legacy offered a behavior (Don’t prompt for client certificate selection when only one certificate exists, URLACTION_CLIENT_CERT_PROMPT), on-by-default for the Local Intranet Zone:

…whereby the browser would not prompt the user to select a certificate if the user only has one certificate that matches the server’s request. In such cases, the client would automatically send the matching certificate without showing a prompt.

For other zones, IE and Edge Legacy do prompt the user to select a certificate before any certificate is sent. This is a privacy measure, because if the browser silently sends the user’s identity to any website that asks for it, this is a “super-cookie” that would allow tracking that user across sites. Also, the client’s certificate might directly contain personally identifiable information about the user (e.g. their email address, office phone number, home address, etc).

Chromium (and thus Chrome, Edge, Brave, Opera, Vivaldi) largely does not use the concept of Zones, so instead the AutoSelectCertificateForUrls policy exists. This policy allows an IT administrator to configure clients to automatically send certificates to specified websites that request them, which can be used to satisfy the need to have, say, the user’s Windows Hello certificate sent to *.login.microsoft.com sites.

Here are two examples: the first selects the first certificate issued by “Windows Hello PIN – MSIT1” and the second rule selects the certificate with a SubjectCN=”RSACSP”.

If you’re trying to set a rule whereby multiple client certificates are valid candidates and the client should just return the first found match, just add another rule with the same pattern and a different filter.

For instance, this set will use SubjectCN=”RSACSP” if a matching certificate is found, or a certificate with IssuerCN=”Windows Hello PIN – MSIT1” if not.
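
For illustration, such a pair of rules might look roughly like this (the URL pattern is hypothetical, and each policy entry is a JSON string):

{"pattern":"https://login.microsoft.com","filter":{"SUBJECT":{"CN":"RSACSP"}}}
{"pattern":"https://login.microsoft.com","filter":{"ISSUER":{"CN":"Windows Hello PIN - MSIT1"}}}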


However, as you may have noticed, the AutoSelectCertificateForUrls policy has one significant limitation: it always sends the user’s first matching certificate to the selected site. Some users might have more than one certificate that matches the policy (for instance, some enterprises have both “test” and “production” certificates).

To address this shortcoming, the Edge team introduced a new policy in Edge 81: ForceCertificatePromptsOnMultipleMatches, which does as it says. If the client has multiple certificates that could be used to satisfy the {OriginFilter->CertificateFilter} rule specified by an AutoSelectCertificateForUrls policy, instead of simply sending the first matching certificate, the browser will show a certificate selection prompt filtered to the certificates that match the policy.

If you find that Microsoft Edge shows a client certificate selection prompt in one scenario where other browsers do not, one possibility is that the site in question is not actually requesting a client certificate from those other browsers for some reason. For instance, some web authentication flows, including Microsoft’s AAD login, take the browser’s User-Agent into account when deciding what authentication mechanisms to use with the client.

In order to understand exactly what’s going on with Client Authentication, collect Network Traffic logs. SSL_HANDSHAKE_MESSAGE_RECEIVED messages of type 13 represent the client certificate request.

-Eric

Bonus trivia

  1. Notably: Certificate selection policies apply across browser profiles, meaning that they are in force even when the user is in an Incognito browser session.
  2. PS: Client Certificate prompting behavior on Android is weird.

The general notion of “how Client Certificates were supposed to work” was that each user would have one certificate for each organization to which they belong, issued by that organization’s root certificate. When visiting that organization’s servers, the server would send in the CertificateRequest message the identifier(s) of the root certificate(s) to which acceptable client certificates chain (using the certificate_authorities structure). The visiting client would then filter the certificates available for selection to only those that chain to that root (hopefully one certificate).

So, say I have two certificates, e.g. USA-NationalID and Microsoft-EmployeeID. When I visit https://portal.microsoft.com, Microsoft sends a CertificateRequest with a MicrosoftRootCA in the certificate_authorities field. My browser automatically filters my client certificates list to just the Microsoft-EmployeeID certificate and then sends that. In contrast, when I visit https://irs.gov, the government sends a CertificateRequest with a USGovernmentRootCA in the certificate_authorities field. My browser automatically filters my client certificates list to just the USA-NationalID certificate and sends that.

In practice, unfortunately, things haven’t worked out that way. Most organizations have not had the infrastructure or discipline to configure things to work like that, and as a consequence you end up with varying client behavior.

Firefox doesn’t seem to filter the certificate list, but it does offer a “Remember this decision” checkbox which presumably reduces user annoyance:

Firefox does not respect the Windows Trust Store, so each client certificate must be manually loaded into Firefox’s configuration. This is a hassle, but it tends to result in a somewhat “cleaner” experience where the user isn’t distracted by random certificates that might be cluttering Windows’ cert store.

In some cases, organizations are generating invalid client certificates but expecting them to work, leading us to create compat accommodations like the FEATURE_CLIENTAUTHCERTFILTER Feature Control Key.

In the browser, SmartCards can be used in two ways: HTTPS Client Certificate Authentication, and Windows Integrated Authentication.

  • Straight TLS mutual authentication, as described above.
  • Windows Integrated Authentication that occurs when visiting a website that sends a WWW-Authenticate: Negotiate header. The client may automatically send the user’s login credentials (Intranet Zone). Or, if those creds do not work or the Zone is not configured for automatic credential release (Non-intranet), the user will be prompted for credentials to use. In Edge 79, the user would get a prompt with two blank fields (“Username” and “Password”). In Edge 80 or later, upon noticing that the user has configured Windows Hello, the user will be shown the Windows Hello auth dialog that allows the user to use their face, type a PIN, use a SmartCard, etc. So, now Edge 80 matches Edge Legacy (v18 and lower).

Low Level Details 1
Low Level Details 2

Nice discussion (with pictures) of setting up client cert auth on IIS.

In Windows 10 Apps, the AppContainer must have the sharedUserCertificates capability to use certificates from the trust store.

The TLS 1.2 spec (RFC 5246, §7.4.6) describes the client’s Certificate message: “This is the first message the client can send after receiving a ServerHelloDone message. This message is only sent if the server requests a certificate. If no suitable certificate is available, the client MUST send a certificate message containing no certificates. That is, the certificate_list structure has a length of zero. If the client does not send any certificates, the server MAY at its discretion either continue the handshake without client authentication, or respond with a fatal handshake_failure alert. Also, if some aspect of the certificate chain was unacceptable (e.g., it was not signed by a known, trusted CA), the server MAY at its discretion either continue the handshake (considering the client unauthenticated) or send a fatal alert.”

CertificateVerify signs using the client certificate’s private key.

CertOpenStore “my” store

https://docs.microsoft.com/en-us/windows/desktop/api/wincrypt/nf-wincrypt-certopenstore

ClientAuthIssuer trust store.

Hard Problems: Fetch in Serviceworker scenario — how can the user select a certificate when no UI is allowed?

Previously, I’ve described how to capture a network traffic log from Microsoft Edge, Google Chrome, and applications based on Chromium or Electron.

In this post, I aim to catalog some guidance for looking at these logs to help find the root cause of captured problems and otherwise make sense of the data collected.

Last Update: April 24, 2020
I expect to update this post over time as I continue to gain experience in analyzing network logs.

Choose A Viewer – Fiddler or Catapult

After you’ve collected the net-export-log.json file using the about:net-export page in the browser, you’ll need to decide how to analyze it.

The NetLog file format consists of a JSON-encoded stream of event objects that are logged as interesting things happen in the network layer. At the start of the file there are dictionaries mapping integer IDs to symbolic constants, followed by event objects that make use of those IDs. As a consequence, it’s very rare that a human will be able to read anything interesting from a NetLog.json file using just a plaintext editor or even a JSON parser.
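
To give a sense of the shape of the data, here is a minimal Node.js sketch that tallies events by type (it assumes the capture was stopped cleanly so the JSON is complete, and that the top-level constants/events keys match current Chromium builds):

const fs = require("fs");
const log = JSON.parse(fs.readFileSync("net-export-log.json", "utf8"));

// The constants block maps symbolic event-type names to the numeric IDs used by events.
const nameById = {};
for (const [name, id] of Object.entries(log.constants.logEventTypes)) {
  nameById[id] = name;
}

// Tally how often each event type appears in the capture.
const counts = {};
for (const ev of log.events) {
  const name = nameById[ev.type] || "UNKNOWN_" + ev.type;
  counts[name] = (counts[name] || 0) + 1;
}
console.log(counts);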

The most common (by far) approach to reading NetLogs is to use the Catapult NetLog Viewer, an HTML/JavaScript application which loads the JSON file and parses it into a much more readable set of events.

An alternative approach is to use the NetLog Importer for Telerik Fiddler.

Importing NetLogs to Fiddler

For Windows users who are familiar with Fiddler, the NetLog Importer extension for Fiddler is easy to use and enables you to quickly visualize HTTP/HTTPS requests and responses. The steps are simple:

  1. Install the NetLog Importer,
  2. Open Fiddler, ideally in Viewer mode fiddler.exe -viewer
  3. Click File > Import > NetLog JSON
  4. Select the JSON file to import

In seconds, all of the HTTP/HTTPS traffic found in the capture will be presented for your review. If the log was compressed before it was sent to you, the importer will automatically extract the first JSON file from a chosen .ZIP or .GZ file, saving you a step.

In addition to the requests and responses parsed from the log, there are a number of pseudo-Sessions with a fake host of NETLOG that represent metadata extracted from the log:

These pseudo-sessions include:

  • RAW_JSON contains the raw constants and event data. You probably will never want to examine this view.
  • CAPTURE_INFO contains basic data about the date/time of the capture, what browser and OS version were used, and the command line arguments to the browser.
  • ENABLED_EXTENSIONS contains the list of extensions that are enabled in this browser instance. This entry will be missing if the log was captured using the --log-net-log command line argument.
  • URL_REQUESTS contains a dictionary mapping every event related to URL_REQUEST back to the URL Requests to which it belongs. This provides a different view of the events that were used in the parsing of the Web Sessions added to the traffic list.
  • SECURE_SOCKETS contains a list of all of the HTTPS sockets that were established for network requests, including the certificates sent by the server and the parameters requested of any client certificates. The Server certificates can be viewed by saving the contents of a -----BEGIN CERTIFICATE----- entry to a file named something.cer. Alternatively, select the line, hit CTRL+C, click Edit > Paste As Sessions, select the Certificates Inspector and press its Content Certificates button.

You can then use Fiddler’s UI to examine each of the Web Sessions.

Limitations

The NetLog format currently does not store request body bytes, so those will always be missing (e.g. on POST requests).

Unless the Include Raw Bytes option was selected by the user collecting the capture, all of the response bytes will be missing as well. Fiddler will show a “dropped” notice when the body bytes are missing:

If the user did not select the Include Cookies and Credentials option, any Cookie or Authorization headers will be stripped down to help protect private data:

Scenario: Finding URLs

You can use Fiddler’s full text search feature to look for URLs of interest if the traffic capture includes raw bytes. Otherwise, you can search the Request URLs and headers alone.

On any session, you can use Fiddler’s “P” keystroke (or the Select > Parent Request context menu command) to attempt to walk back to the request’s creator (e.g. referring HTML page).

You can look for the traffic_annotation value that reflects why a resource was requested by looking for the X-Netlog-Traffic_Annotation Session Flag.

Scenario: Cookie Issues

If Fiddler sees that cookies were not set or sent due to features like SameSiteByDefault cookies, it will make a note of that in the Session using a pseudo $NETLOG-CookieNotSent or $NETLOG-CookieNotSet header on the request or response:

Closing Notes

If you’re interested in learning more about this extension, see the announcement blog post and the open-source code.

While the Fiddler Importer is very convenient for analyzing many types of problems, for others, you need to go deeper and look at the raw events in the log using the Catapult Viewer.


Viewing NetLogs with the Catapult Viewer

Opening NetLogs with the Catapult NetLog Viewer is even simpler:

  1. Navigate to the web viewer
  2. Select the JSON file to view

If you find yourself opening NetLogs routinely, you might consider using a shortcut to launch the Viewer in an “App Mode” browser instance: msedge.exe --app=https://netlog-viewer.appspot.com/#import

The App Mode instance is a standalone window which doesn’t contain tabs or other UI:

Note that the Catapult Viewer is a standalone HTML application. If you like, you can save it as a .HTML file on your local computer and use it even when completely disconnected from the Internet. The only advantage to loading it from appspot.com is that the version hosted there is updated from time to time.

Along the left side of the window are tabs that offer different views of the data– most of the action takes place on the Events tab.

Tips

If the problem only exists on one browser instance, check the Command Line parameters and Active Field trials sections on the Import tab to see if there’s an experimental flag that may have caused the breakage. Similarly, check the Modules tab to see if there are any browser extensions that might explain the problem.

Each URL Request has a traffic_annotation value which is a hash you can look up in annotations.xml. That annotation will help you find what part of Chromium generated the network request:

Most requests generated by web content will be generated by the blink_resource_loader, navigations will have navigation_url_loader and requests from features running in the browser process are likely to have other sources.

Scenario: DNS Issues

Look at the DNS tab on the left, and HOST_RESOLVER_IMPL_JOB entries in the Events tab.

One interesting fact: The DNS Error page performs an asynchronous probe to see whether the configured DNS provider is working generally. The Error page also has automatic retry logic; you’ll see a duplicate URL_REQUEST sent shortly after the failed one, with the VALIDATE_CACHE load flag added to it. In this way, you might see a DNS_PROBE_FINISHED_NXDOMAIN error magically disappear if the user’s router’s DNS flakes.

Scenario: Cookie Issues

Look for COOKIE_INCLUSION_STATUS events for details about each candidate cookie that was considered for sending or setting on a given URL Request. In particular, watch for cookies that were excluded due to SameSite or similar problems.

Scenario: HTTPS Handshaking Issues

Look for SSL_CONNECT_JOB and CERT_VERIFIER_JOB entries. Look at the raw TLS messages on the SOCKET entries.

Scenario: HTTPS Certificate Issues

Note: While NetLogs are great for capturing certs, you can also get the site’s certificate from the browser’s certificate error page.

The NetLog includes the certificates used for each HTTPS connection in the base64-encoded SSL_CERTIFICATES_RECEIVED events.

You can just copy paste each certificate (including ---BEGIN to END---) out to a text file, name it log.cer, and use the OS certificate viewer to view it. Or you can use Fiddler’s Inspector, as noted above.

If you’ve got Chromium’s repo, you can instead use the script at \src\net\tools\print_certificates.py to decode the certificates. There’s also a cert_verify_tool in the Chromium source you might build and try. For Mac, using verify-cert to check the cert and dump-trust-settings to check the state of the Root Trust Store might be useful.
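
Another option, assuming OpenSSL is installed, is to decode the saved certificate from the command line:

openssl x509 -in log.cer -text -noout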

In some cases, running the certificate through an analyzer like https://crt.sh/lintcert can flag relevant problems.

Scenario: Authentication Issues

Look for HTTP_AUTH_CONTROLLER events, and responses with the status codes 401, 403, and 407.

For instance, you might find that the authentication fails with ERR_INVALID_AUTH_CREDENTIALS unless you enable the browser’s DisableAuthNegotiateCnameLookup policy (Kerberos has long been very tricky).

Scenario: Debugging Proxy Configuration Issues

See Debugging the behavior of Proxy Configuration Scripts.


Got a great NetLog debugging tip I should include here? Please leave a comment and teach me!

-Eric

I’ve written about Browser Proxy Configuration a few times over the years, and I’m delighted that Chromium has accurate & up-to-date documentation for its proxy support.

One thing I’d like to call out is that Microsoft Edge’s new Chromium foundation introduces a convenient new feature for debugging the behavior of Proxy AutoConfiguration (PAC) scripts.

To use it, simply add alert() calls to your PAC script, like so:

alert("!!!!!!!!! PAC script start parse !!!!!!!!");
function FindProxyForURL(url, host) {
  alert("Got request for (" + url + " with host: " + host + ")");
  return "PROXY 127.0.0.1:8888";
}
alert("!!!!!!!!! PAC script done parse !!!!!!!!");

Then, collect a NetLog trace from the browser:

msedge.exe --log-net-log=C:\temp\logFull.json --net-log-capture-mode=IncludeSocketBytes

…and reproduce the problem.

Save the NetLog JSON file and reload it into the NetLog viewer. Search in the Events tab for PAC_JAVASCRIPT_ALERT events:

Even without adding new alert() calls, you can also look for HTTP_STREAM_JOB_CONTROLLER_PROXY_SERVER_RESOLVED events to see what proxy the proxy resolution process determined should be used.

One limitation of the current logging is that if the V8 Proxy Resolver utility process crashes (e.g. because Citrix injected a DLL into it), there’s no mention of that crash in the NetLog; it will just show DIRECT. Until the logging is enhanced, users can hit SHIFT+ESC to launch the browser’s task manager and check to see whether the utility process is alive.

Try using the System Resolver

In some cases (e.g. when using DirectAccess), you might want to try using Windows’ proxy resolution code rather than the code within Chromium.

The --winhttp-proxy-resolver command line argument will direct Chrome/Edge to call out to Windows’ WinHTTP Proxy Service for PAC processing.

Differences in WPAD/PAC Processing

  • The WinHTTP Proxy Service caches proxy authentication credentials and reuses them across browser launches; Chromium does not.
  • The WinHTTP Proxy Service caches WPAD determination across process launches. Chromium does not and will need to redetect the proxy each time the browser reopens.
  • Internet Explorer/WinINET/Edge Legacy call the PAC script’s FindProxyForURLEx function (introduced to unlock IPv6 support), if present, and FindProxyForURL if not.
  • Chrome/Edge/Firefox only call the FindProxyForURL function and do not call the Ex version (see the compatibility sketch after this list).
  • Internet Explorer/WinINET/Edge Legacy expose a getClientVersion API that is not defined in other PAC environments.
  • Chrome/Edge may return different results than IE/WinINET/EdgeLegacy from the myIpAddress function when connected to a VPN.
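
Here is a minimal compatibility sketch (the proxy address is a placeholder): defining FindProxyForURLEx as a thin wrapper means WinINET-based clients and Chromium/Firefox each find the entry point they expect:

function FindProxyForURL(url, host) {
  return "PROXY proxy.example:8080; DIRECT";
}

// Only WinINET-based clients (IE, Edge Legacy) look for this; others ignore it.
function FindProxyForURLEx(url, host) {
  return FindProxyForURL(url, host);
}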

Notes for Other Browsers

  • Prior to Windows 8, IE showed PAC alert() notices in a modal dialog box. It no longer does so and alert() is a no-op.
  • Firefox shows alert() messages in the Browser Console (hit Ctrl+Shift+J); note that Firefox’s Browser Console is not the Web Console where web pages’ console.log statements are shown.

-Eric

While lately I’ve been endlessly streaming the latest news with horrified fascination, this morning my calendar unexpectedly popped up a reminder set over a year ago… Today is the fifteenth anniversary of my big-league blogging debut on the Internet Explorer Team’s blog.

My first post there, “A HTTP Detective Story” remains one of my favorites. Sadly, its topic feels all too familiar: a website took a dependency upon a browser quirk, the Referer and User-Agent headers were critical elements of the repro, and I used Fiddler to root-cause the problem. I’ve learned (and shared) so much more about these topics over the last fifteen years, and I appreciate the readers who’ve followed me from the IEBlog to IEInternals (237 posts) to Telerik’s Fiddler blog (41 posts) to my book, to this, my newest blog, TextSlashPlain (189 posts and counting!).

Here’s to learning and sharing for the next 15 years!

With gratitude,

-Eric

Brave, Mozilla Firefox, Google Chrome and Microsoft Edge presented on our current privacy work at the Enigma 2020 conference in late January. The talks were mostly high-level, but there were a few feature-level slides for each browser.

My ~10 minute presentation on Microsoft Edge was first, followed by Firefox, Chrome, and Brave.

At 40 minutes in, a 35-minute Q&A session starts, first with questions from the panel moderator, followed by questions from the audience.