
Type https://example.com in your web browser’s address bar and hit enter.

What happens?

Before connecting to the example.com server, your browser must convert “example.com” to the network address at which that server is located.


It does this lookup using a protocol called “DNS.” Today, most DNS transactions are conducted in plaintext (not encrypted) by sending UDP messages to the DNS resolver your computer is configured to use.

There are a number of problems with the 36-year-old DNS protocol, but a key one is that unencrypted UDP traffic allows network intermediaries to see (and potentially modify) your lookups: attackers can learn where you’re browsing, and potentially even direct your traffic to some other server.

The DNS-over-HTTPS (DoH) protocol attempts to address some of these problems by sending DNS traffic over an HTTPS connection to the DNS resolver. The encryption of a TLS connection helps prevent network intermediaries from knowing what addresses your browser is looking up; your queries are private between your PC and the DNS resolver that is providing the answers. The expressiveness of HTTP (with request and response headers) provides interesting options for future extensibility, and the modern HTTP/2 and HTTP/3 protocols aim to provide high-performance, parallel transactions over a single connection.
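To make this concrete, here’s a minimal sketch of a DoH lookup against Cloudflare’s JSON endpoint (the application/dns-json format is a resolver convenience; the standardized RFC 8484 wire format uses application/dns-message):

async function dohLookup(hostname: string): Promise<string[]> {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(hostname)}&type=A`;
  const response = await fetch(url, { headers: { accept: "application/dns-json" } });
  const result = await response.json();
  // Each answer record's "data" field carries a resolved address
  return (result.Answer ?? []).map((answer: { data: string }) => answer.data);
}

dohLookup("example.com").then((addresses) => console.log(addresses));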

Try It

Support for DNS-over-HTTPS is coming to many browsers and operating systems (including a future version of Windows). You can even try DoH out in the newest version of Microsoft Edge (v79+) by starting the browser with a special command line flag. The following command line will start the browser and instruct it to perform DNS lookups using the Cloudflare DoH server:

msedge.exe --enable-features="DnsOverHttps<DoHTrial" --force-fieldtrials="DoHTrial/Group1" --force-fieldtrial-params="DoHTrial.Group1:Fallback/false/Templates/https%3A%2F%2Fcloudflare-dns.com%2Fdns-query"

You can test whether the feature is working as expected by visiting https://1.1.1.1/help. Unfortunately, this command line flag presently only works on unmanaged PCs, meaning it doesn’t do anything on PCs that are joined to a Windows domain.

Some Thoughts, In No Particular Order

Long-time readers of this blog know that I want to “HTTPS ALL THE THINGS,” and DNS is no exception. Unfortunately, as with most protocol transitions, this turns out to be very, very complicated.

SNI

The privacy benefits of DNS-over-HTTPS are predicated on the idea that a network observer, blinded from your DNS lookups by encryption, will not be able to see where you’re browsing.

Unfortunately, network observers, by definition, can observe your traffic, even if it’s encrypted.

The network observer will still see the IP addresses you’re connecting to, and that’s often sufficient to know what sites you’re browsing.

Worse, they are usually still able to tell what specific HTTPS site you’re visiting on that IP address. That’s because one of the current limitations of HTTPS is that the browser sends, in unencrypted form, the hostname it expects to see in the server’s certificate as a part of the ClientHello message in the HTTPS handshake. Closing this Server Name Indication (SNI) hole requires implementation of Encrypted SNI (ESNI) and this feature is not yet implemented in Chromium.

Privacy From Observers, Not the Resolver

If your Internet Service Provider (say, for example, Comcast) is configured to offer DNS-over-HTTPS, and your browser uses their resolver, your network lookups are protected from observers on the local network, but not from the Comcast resolver.

Because the data handling practices of resolvers are often opaque, and because there are business incentives for resolvers to make use of lookup data (for advertising targeting or analytics revenue), it could be the case that the very actor you are trying to hide your traffic from (e.g. your ISP) is exactly the one holding the encryption key you’re using to encrypt the lookup traffic.

To address this, some users choose to send their traffic not to the default resolver their device is configured to use (typically provided by the ISP) but instead send the lookups to a “Public Resolver” provided by a third-party with a stronger privacy promise.

However, this introduces its own complexities.

Public Resolvers Don’t Know Private Addresses

A key problem in the deployment of DNS-over-HTTPS is that public resolvers (Google Public DNS, Cloudflare, OpenDNS, etc.) cannot know the addresses of servers that are within an intranet. If your browser attempts to look up a hostname on your intranet (say MySecretServer.intranet.MyCo.com) using the public resolver, the public resolver not only gets information about your internal network (e.g. now Google knows that you have a server called MySecretServer.intranet.MyCo.com), but it also returns “Sorry, never heard of it.” At this point, your browser has to decide what to do next. It might fail entirely (“Sorry, site not found”) or it might “fail open” and perform a plain UDP lookup using the system-configured resolver provided by e.g. your corporate network administrator.

This fallback means that a network attacker might simply block your DoH traffic such that you perform all of your queries in unprotected fashion. Not great.
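In pseudocode, the browser’s dilemma looks something like this sketch (dohLookup and systemLookup are hypothetical stand-ins for the encrypted and plaintext resolvers):

declare function dohLookup(hostname: string): Promise<string[]>;
declare function systemLookup(hostname: string): Promise<string[]>;

async function resolveWithFallback(hostname: string): Promise<string[]> {
  try {
    return await dohLookup(hostname); // encrypted lookup
  } catch {
    // "Failing open" keeps intranet names working, but lets an attacker who
    // blocks DoH traffic downgrade every lookup to plaintext UDP.
    return await systemLookup(hostname);
  }
}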

Even alerting the user to such a problem is tricky: What could the browser even say that a human might understand? “Nerdy McNerdy Nerd Nerd Nerd Nerd Nerd Address Nerd Resolution Nerd Geek. Privacy. Network. Nerdery. Geekery. Continue?”

Centralization Isn’t Great

Centralizing DNS resolutions to the (relatively small) set of public DNS providers is contentious, at best. Some European jurisdictions are uncomfortable about the idea that their citizens’ DNS lookups might be sent to an American tech giant.

Some privacy-focused users are primarily worried about the internet giants (e.g. Google, Cloudflare) and are very nervous that the rise of DoH will result in browsers sending traffic to these resolvers by default. Google has said they won’t do that in Chrome, while Firefox is experimenting with using Cloudflare by default in some locales.

Content Filtering

Historically, DNS resolutions were a convenient choke point for schools, corporations, and parents to implement content filtering policies. By interfering with DNS lookups for sites that network users are forbidden to visit (e.g. adult content, sites that put the user’s security at risk, or sites that might result in legal liability for the organization), these organizations were able to easily prevent non-savvy users from connecting to unwanted sites. Using DoH to a Public DNS provider bypasses these types of content filters, leaving the organization with unappealing choices: start using lower-granularity network interception (e.g. blocking by IP address), install content filters on users’ devices directly, or attempt to block DoH resolvers entirely, forcing users’ devices to fall back to the filtered resolver.

Geo CDNs and Other Tricks

In the past, DNS was one mechanism that a geographically distributed CDN could use to load-balance its traffic such that users get the “best” answers for their current locale. For instance, if the resolver was answering a query from a user in Australia, it might return a different server address than when resolving a query from a user in Florida.

These schemes and others get more complicated when the user isn’t using a local DNS resolver and is instead using a central public resolver, possibly provided by a competitor to the sites that the user is trying to visit.

Don’t Despair

Despite these challenges and others, DNS-over-HTTPS represents an improvement over the status quo, and as browser and OS engineering teams and standards bodies invest in addressing these problems, we can expect that deployment and use of DoH will grow more common in the coming years.

DoH will eventually be a part of a more private and secure web.

-Eric Lawrence

 

As your browser navigates from page to page, servers are informed of the URL you’ve come from via the Referer HTTP header1; the document.referrer DOM property reveals the same information to JavaScript.

Similarly, as the browser downloads the resources (images, styles, JavaScript) within webpages, the Referer header on the request allows the resource’s server to determine which page is requesting the resource.

The Referrer is omitted in some cases, including:

  • When the user navigates via some mechanism other than a link in the page (e.g. choosing a bookmark or using the address box)
  • When navigating from HTTPS pages to HTTP pages
  • When navigating from a resource served by a protocol other than HTTP(S)
  • When the page opts-out (details in a moment)

Usefulness

The Referrer mechanism can be very useful, because it helps a site owner understand from where their traffic is originating. For instance, WordPress automatically generates this dashboard which shows me where my blog gets its visitors:

[Screenshot: the WordPress dashboard showing top referrers for this blog]

I can see not only which Search Engines send me the most users, but also which specific posts on Reddit are driving traffic my way.

Privacy Implications

Unfortunately, this default behavior has a significant impact on privacy, because it can potentially leak private and important information.

Imagine, for example, that you’re reviewing a document your mergers and acquisitions department has authored, with the URL https://contoso.com/Q4/PotentialAcquisitionTargetsUpTo5M.docx. Within that document, there might be a link to https://fabrikam.com/financialdisclosures.htm. If you were to click that link, the navigation request to Fabrikam’s server would contain the full URL of the document that led you there, potentially revealing information that your firm would’ve preferred to keep quiet.
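The request Fabrikam’s server receives would look something like this:

GET /financialdisclosures.htm HTTP/1.1
Host: fabrikam.com
Referer: https://contoso.com/Q4/PotentialAcquisitionTargetsUpTo5M.docx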

Similarly, your search queries might contain something you don’t mind Bing knowing (“Am I required to disclose a disease before signing up for HumongousInsurance.com?”) but that you didn’t want to immediately reveal to the site where you’re looking for answers.

If your web-based email reader puts your email address in the URL, or includes the subject of the current email, links you click in that email might be leaking information you wish to keep private.

The list goes on and on.

Referrer Policy

Websites have always had ways to avoid leaking information to navigation targets, usually involving nonstandard navigation mechanisms (e.g. meta refresh) or wrapping all links so that they go through an innocuous page (e.g. https://example.net/offsitelink.aspx).

However, these mechanisms were non-standard, cumbersome, and would not control the referrer information sent when downloading resources embedded in pages. To address these limitations, Referrer Policy was developed and implemented by most browsers2.

[Screenshot: caniuse.com support table for Referrer Policy]

Referrer Policy allows a website to control what information is sent in Referer headers and exposed to the document.referrer property. As noted in the spec, the policy can be specified in several ways (examples follow the list):

  • Via the Referrer-Policy HTTP response header.
  • Via a meta element with a name of referrer.
  • Via a referrerpolicy content attribute on an a, area, img, iframe, or link element.
  • Via the noreferrer link relation on an a, area, or link element.
  • Implicitly, via inheritance.
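For instance, the first four mechanisms look like this (the policy values shown are illustrative):

Referrer-Policy: no-referrer

<meta name="referrer" content="no-referrer">

<a href="https://example.com/" referrerpolicy="no-referrer">Link</a>

<a href="https://example.com/" rel="noreferrer">Link</a>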

The policy can be any of the following:

  • no-referrer – Do not send a Referer.
  • unsafe-url – Send the full URL (lacking only auth info and fragment), even on navigations from HTTPS to HTTP.
  • no-referrer-when-downgrade – Don’t send the Referer when navigating from HTTPS to HTTP. [The longstanding default behavior of browsers.]
  • strict-origin-when-cross-origin – For a same-origin navigation, send the URL. For a cross-origin navigation, send only the Origin of the referring page. Send nothing when navigating from HTTPS to HTTP. [Spoiler alert: The new default.]
  • origin-when-cross-origin – For a same-origin navigation, send the URL. For a cross-origin navigation, send only the Origin of the referring page. Send the Referer even when navigating from HTTPS to HTTP.
  • same-origin – Send the Referer only for same-origin navigations.
  • origin – Send only the Origin of the referring page.
  • strict-origin – Send only the Origin of the referring page; send nothing when navigating from HTTPS to HTTP.
  • empty string – Inherit, or use the default.

As you can see, there are quite a few policies. That’s partly due to the strict- variations which prevent leaking even the origin information on HTTPS->HTTP navigations.

Improving Defaults

With this background out of the way, the Chromium team has announced that they plan to change the default Referrer Policy from no-referrer-when-downgrade to strict-origin-when-cross-origin. This means that cross-origin navigations will no longer reveal path or query string information, significantly reducing the possibility of unexpected leaks.

As with other big privacy changes, this change is slated to ship in v80, but the code has been in for five years, and you can enable it today in Chrome 78+ and Edge 78+:

  1. Visit chrome://flags/#reduced-referrer-granularity
  2. Set the feature to Enabled
  3. Restart your browser

[Screenshot: the #reduced-referrer-granularity entry in chrome://flags]

I’ve published a few toy test cases for playing with Referrer Policy here.

As noted in their Intent To Implement, the Chrome team is not the first to make changes here. As of Firefox 70 (Oct 2019), the default referrer policy is set to strict-origin-when-cross-origin, but only for requests to known-tracking domains, OR while in Private mode. In Safari ITP, all cross-site HTTP referrers and all cross-site document.referrers are downgraded to origin. Brave forges the Referer (sending the Origin of the target, not the source) when loading cross-origin resources.

Understand the Limits

Note that this new default is “opt-out”: a page can still send unrestricted referrer URLs if it chooses. As an author, I selfishly hope that sites like Reddit and Hacker News might do so.

Also note that this new default does not in any way limit JavaScript’s access to the current page’s URL. If your page at https://contoso.com/SuperSecretDoc.aspx includes a tracking script:

<script src="https://tracker.example/track.js"></script>

… the HTTPS request for track.js will send Referer: https://contoso.com/, but when the script runs, it will have access to the full URL of its execution context (https://contoso.com/SuperSecretDoc.aspx) via the window.location.href property.
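For instance, a hypothetical track.js could read:

// Runs inside track.js, in the embedding page's execution context; the
// trimmed Referer header does not limit what the script itself can read.
console.log(window.location.href); // "https://contoso.com/SuperSecretDoc.aspx"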

Test Your Sites

If you’re a web developer, you should test your sites in this new configuration and update them if anything is unexpectedly broken. If you want the browser to behave as it used to, you can use any of the policy-specification mechanisms to request no-referrer-when-downgrade behavior for either an entire page or individual links.

Or, you might pick an even stricter policy (e.g. same-origin) if you want to prevent even the origin information from leaking out on a cross-site basis. You might consider using this on your Intranet, for instance, to help prevent the hostnames of your Intranet servers from being sent out to public Internet sites.

Stay private out there!

-Eric

1 The misspelling of the HTTP header name is a historical accident which was never corrected.

2 Notably, Safari, IE11, and versions of Edge 18 and below only supported an older draft of the Referrer policy spec, with tokens never (matching no-referrer), always (matching unsafe-url), origin (unchanged) and default (matching no-referrer-when-downgrade). Edge 18 supported origin-when-cross-origin, but only for resource subdownloads.

The Chrome team is embarking on a clever and bold plan to change the recipe for cookies. It’s one of the most consequential changes to the web platform in almost a decade, but with any luck, users won’t notice anything has changed.

But if you’re a web developer, you should start testing your sites and services now to help ensure a smooth transition.

What’s this all about?

As originally designed, cookies were very simple. When a browser made a request to a website, that website could return a tiny piece of text, called a cookie, to the browser. When the web browser subsequently requested any resource from that website, the cookie string would be echoed back to the server that first sent it.
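For example, the server’s response might set a cookie (names and values here are arbitrary):

HTTP/1.1 200 OK
Set-Cookie: id=ABC123

…and every subsequent request to that site echoes it back:

GET /anything HTTP/1.1
Host: example.com
Cookie: id=ABC123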

Simple, right?

A bit too simple, as it turns out.

Browser designers have spent the last two decades trying to clean up the mess that this one simple feature causes, and proposed alternatives have never gained adoption.

There are two major classes of problem with the design of cookies: Privacy, and Security.

Privacy

The top privacy problem is that cookies are sent every time a request is made for a resource, even if that request is made from a completely different context. So, if you visit A.example.com, that page might request a tracking pixel from ad.doubleclick.net. This tracking pixel might set a cookie. The tracking pixel’s cookie is called a third party cookie because it was set by a domain unrelated to the page itself.

If you later visit B.textslashplain.com, which also contains a tracking pixel from ad.doubleclick.net, the tracking pixel’s cookie set on your visit to A.example.com is sent to ad.doubleclick.net, and now that tracker knows that you’ve visited both sites. As you browse more and more sites that contain a tracking pixel from the same provider, that provider can build up a very complete profile of the sites you like to visit, and use that information to target ads to you, sell the data to a data aggregation company, etc.

Today, Brave blocks 3rd party cookies by default, while Safari’s ITP feature does something more intricate. Firefox and the new Edge have “Tracking Prevention” features that block 3rd-party cookies from known trackers.

Most browsers offer a setting to turn off ALL third party cookies, and older versions of Internet Explorer used P3P to block cookies that did not promise to abide by reasonable privacy protections. However, almost all users leave 3rd party cookies enabled, and enough sites sent fraudulent P3P declarations that P3P support was ripped out of the only browser that supported it.

Security

The security problems with cookies are a bit more subtle.

In most cases, after you log in, a site will store your identity in a cookie, such that you don’t have to reenter your password on every page, or retap your security key every time you do anything. An authentication token is stored in a cookie, and each request you make to a site carries that cookie and token.

The problem is that this creates the possibility of a cross-site request forgery (CSRF) attack, in which an attacker carefully crafts a request to a website to which you are logged in. When you visit the attacker’s site (say, to read a news article or view an image link posted to your social media feed), the attacker’s page instructs your browser to send its malicious request (“Transfer $1000 from me to @badguy”) to the victim site where you are logged in (e.g. https://bank.example).

Normally, such a request would be ignored or responded to by a demand for credentials, but because your browser is already logged in to bank.example and because your browser made the request, the server receives the cookie containing your authentication token and deems the request legitimate. You’ve been robbed! This class of attack is called the confused deputy attack.

Now, there are myriad ways to protect against this problem, but they all require careful work on the part of web application developers, and a long history of exploits shows that failing to protect against CSRF is a common mistake.
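For example, one common defense is the “synchronizer token” pattern: the server places a random token in both a cookie and a hidden form field, then verifies that the two match on every state-changing request. A minimal sketch of that server-side comparison (the function name is illustrative):

import { timingSafeEqual } from "node:crypto";

// True only when the token echoed in the form matches the cookie's copy.
// A cross-site attacker can force the cookie to be sent, but cannot read
// it, so they cannot supply a matching form token.
function isValidCsrfToken(cookieToken: string, formToken: string): boolean {
  if (!cookieToken || !formToken) return false;
  const a = Buffer.from(cookieToken);
  const b = Buffer.from(formToken);
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}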

Other Privacy and Security Problems

I’ve described the biggest and most prominent of the security and privacy problems with the design of cookies, but those are just the tip of the iceberg. There are many other problems, including:

  • Because cookies are sent on plaintext HTTP requests, a passive network observer can watch your cookies as they’re sent over the network and correlate requests back to a single client. The NSA famously abused this vulnerability.
  • Cookies are sent to servers which respond with data that might allow Cross Site Leaks or violations of Same Origin Policy. For instance, an attacker might include in their page several guesses as to your identity. If they guess right (e.g. your cookie matches the URL identity), the image loads and now the attacker site knows exactly who you are.
  • Cookies can be trivially stolen in an XSS attack if the HTTPOnly attribute was not set.
  • Cookies can be trivially leaked if the server forgets to set the Secure attribute and the site isn’t on the HSTS preload list.
  • Same Origin Policy blocks reading of cross-origin resources, but this depends on the integrity of the browser sandbox. Attacks like Spectre weaken this guarantee. Ambient authentication (like cross-origin cookies) weakens Cross-Origin Read Blocking’s ability to prevent a compromised renderer from stealing data.

Cookie Design Improvements

Over the years, browsers have introduced more and more features and toggles to help lock down cookies, from the Secure and HTTPOnly attributes to Cookie Prefixes (née Magic Named Cookies).

Perhaps the most promising improvement is a feature called SameSite cookies, by which cookies can opt-in to being sent only in a first-party context:

Set-Cookie: __HostAuth=F123ABCA; SameSite=Strict; secure; httponly;

This helps protect your site against CSRF attacks and helps mitigate leakage of the user’s identity in a cross-site context.

There’s a nice SameSite cookie explainer (with pictures!).

While broadly supported by browsers, the SameSite directive isn’t getting used everywhere it should be.

Big Changes are Coming!

So the Chrome folks plan to change that.

In Chrome 80 and later, cookies will default to SameSite=Lax. This means that cookies will automatically be sent only in a first party context unless they opt out by explicitly setting a directive of SameSite=None:

Set-Cookie: ACookieAvailableCrossSite=value; SameSite=None; Secure; HttpOnly


This change is small in size, and huge in scope: it has major implications for any site that expects its cookies to be used in a cross-origin context.

 

What’s the Immediate Good?

In one fell swoop, many websites will get more secure. Sites that were previously vulnerable to CSRF and cross-site leak attacks will be protected from attack in the most popular browser.

Privacy improves, because setting and sending of 3rd party cookies are blocked-by-default:

[Screenshot: 3rd-party cookies blocked in a subframe]

Who’s on board?

We plan to match this change in default for the new Edge browser (with experiments starting in v80). However, there’s no plan to match this change for Internet Explorer (please stop using it!) or the old Edge (v18 and earlier).

Per Chromium’s Intent-to-Implement announcement, Firefox is looking into matching the change, although they’ve joking-not-joking suggested that they’re going to let Chrome lead the charge (and bear the brunt of the compatibility impact) before turning the feature on by default.

Per the I2I, Safari has not yet weighed in on this change. Safari’s ITP feature already imposes many interesting restrictions on cross-site cookies.

What Can Go Wrong?

If users visit a site that expects its cookies to be available but the cookies are missing, users might get a confusing error message suggesting that they toggle a setting that won’t help:

[Screenshot: a confusing cookies-related error message]

Or, the site might just redirect between its identity provider and itself forever.

These sorts of problems happen on sites that use Federated Identity providers that depend on accessing cookies from 3rd-party subframes:

[Diagram: a federated identity flow relying on a 3rd-party subframe]

To fix this, the identity provider site will either need to set SameSite=None on its cookies, or will need to use a browser storage feature (e.g. localStorage) that is not impacted by this change. Please note that other browser features do impact the availability of DOM Storage, so it’s not a silver bullet.
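For the cookie route, the identity provider’s Set-Cookie response header would need to look something like this (cookie name and value are illustrative):

Set-Cookie: idp_session=abc123; SameSite=None; Secure; HttpOnly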

Rollout Plan

The Chrome team has set an ambitious timeline which calls for turning this feature on-by-default for Chrome 80, slated for stable release on February 4th, 2020. Chrome’s rollout plan includes enabling the new default on an experimental basis in Chrome 79 pre-release [Beta/Dev/Stable] channels for machines which are not externally managed — attempting to defer walking into the compatibility minefield of enterprise intranets. Presently, this exclusion does not apply to machines that are domain-joined via AAD due to a limitation in Chromium.

The Chrome team also announced two enterprise policies for Chrome 80 that will allow admins to opt-out of the new default entirely, or to opt-out only specific sites. Edge expects to offer these same policies.

Developer Tooling

Developers who wish to enable the SameSite-by-Default feature locally for testing purposes can do so by visiting chrome://flags and searching for SameSite:

[Screenshot: the SameSite entries in chrome://flags]

Set the SameSite by default cookies feature to Enabled and restart the browser.

You can view the cookies used by the current page using the Application tab of the Developer Tools; the column at the far right shows the declared SameSite attribute:

[Screenshot: the Application tab showing each cookie’s SameSite value]

The Chrome team has enabled logging in the Developer Tools to notify web developers that cookie behavior is changing. Visit chrome://flags/#cookie-deprecation-messages to ensure that the warnings are enabled:

[Screenshot: the cookie deprecation messages entry in chrome://flags]

If you then explore my test page, you can see the notices from the tools:

[Screenshot: DevTools console warnings about cookies affected by the new rules]

The Cookies subtab of the Network tab picked up a new checkbox “show filtered out request cookies” which allows you to see (in yellow) which cookies were not sent for the selected request due to SameSite rules:

[Screenshot: the Cookies subtab highlighting filtered-out request cookies in yellow]

Cookies restricted by SameSite rules are also logged in NetLog captures (issue 1005217).

Problems and Accommodations

While the vast majority of cookie scenarios will continue to work as expected, compatibility breaks are inevitable. Unfortunately, some of these breaks might not be trivially fixed by adding the SameSite=None attribute.

For instance, older versions of Safari treated SameSite=None as SameSite=Strict, which means that servers must avoid sending the None token to Safari 12.

The .NET Framework’s cookie writer used to simply omit the SameSite attribute when the SameSiteMode was “None.” Changing this will require the affected sites to update their framework to a version with the patch.

Early in our investigations, we found another problem related to how SameSite impacts cookies sent while navigating.

Specifically, over a million sites first set an anti-CSRF cookie on themselves, then redirect to a federated Login provider, then the Login provider POSTs the login information back to the site. That initial anti-CSRF cookie is only meant to be used in a first party context. Crucially, however, SameSite=Lax cookies are not sent on cross-site navigations that use the HTTP POST verb. Making the anti-CSRF cookies SameSite=Lax by default breaks this scenario and thus breaks tons of websites.

[Diagram: the anti-CSRF cookie and federated login POST flow]

The Two Minute Mitigation

Demanding these security cookies be set to SameSite=None would be both onerous (many more sites would need to change) and misleading (because these cookies are really only meant to go to a 1st party context).

To address this breakage, the new default was adjusted to allow a SameSite-Lax-by-default cookie to be sent on top-level cross-site POST navigations within two minutes of its creation, significantly reducing the breakage without giving up all of the security benefit of the change.

Note that the 2 minute mitigation might not be enough for login scenarios that take longer than 2 minutes. For instance, consider the case where the login flow is happening in a background tab, or you have to fetch your Security Key, or a child must get a parent’s permission, etc. Sites that wish to handle scenarios like this will need to store a copy of their anti-CSRF token elsewhere (e.g. sessionStorage seems appropriate).
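A sketch of that approach, with illustrative names throughout (csrf_token, login.example); the token is duplicated into sessionStorage before the redirect so the return page can recover it even if the cookie is withheld:

// Before redirecting to the identity provider:
const token = crypto.randomUUID();
document.cookie = `csrf_token=${token}; Path=/; Secure`;
sessionStorage.setItem("csrf_token", token);
location.assign("https://login.example/start");

// Later, on the page that receives the provider's POST response:
// const expected = sessionStorage.getItem("csrf_token");
// ...compare "expected" against the token echoed in the login response.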

The Chrome team plans to eventually remove the 2 minute mitigation entirely.

Compat Landmine: document.cookie

In Firefox and Safari, the document.cookie DOM property matches the Cookie header, including omission of cookies that were restricted by SameSite navigation rules.

In contrast, in Chrome and Edge, SameSite cookies that are omitted from the Cookie header are still included in the document.cookie collection following a cross-origin navigation. I’ve been convinced that this actually makes more sense, although the reasoning is subtle [issue].

To the extent possible, you should set your cookies with the httponly attribute, so that they’re not available to JavaScript and this compatibility difference is irrelevant.

What’s Next?

Assuming the rollout happens on schedule, users will get more security and more privacy. But that’s just the start.

The next step is to combat the “non-secure-cookies-are-trackable” attack mentioned previously. To prevent non-secure cross-site cookies being used by network observers to follow users around the web, SameSite=None cookies will be blocked if set without the Secure attribute. Chrome’s timeline for enabling this change by default seems squishier, but ChromeStatus claims it is also slated for Chrome 80.

After that, the next step is to combat the inevitable abuse by trackers.

Because trackers can simply opt their own cookies out of restrictions by setting SameSite=None, trackers will do so. But this isn’t as bad as you think: by forcing sites to explicitly declare each cookie for which cross-site use is intended, browsers can then focus extra love and attention around such cookies.

If we peek at Chrome’s flags page today, we see an interesting hint about the Chrome team’s plans:

[Screenshot: a chrome://flags entry hinting at third-party cookie removal]

Enabling that option enables EnableRemovingAllThirdPartyCookies, which adds a new Remove Third-Party Cookies button to the All cookies and site data page. When clicked, it pops the following dialog:

[Screenshot: the Remove Third-Party Cookies confirmation dialog]

The HandleRemoveThirdParty() function invoked by the Clear button clears not only the cookies from those domains, but also all of the site data for the sites on which those cookies existed. This provides a strong disincentive for sites to opt-in to SameSite=None cookies unless they really need to.

(Disclaimer: Chrome might never launch the “Clear third-party cookies” button, but it’s in the code today).

You can expect that browser designers will soon dream up new and interesting remediations for cookies marked for access across multiple sites. For instance, browsers could limit the lifetime of those cookies to days or hours, or even a single browser session (tricky).

We live in interesting times!

Update: Chrome pushed back experimenting with this feature from Chrome 78 to Chrome 79.

Update: The Chromium team have published a post about the new SameSite cookie default on their blog, including a list of incompatible legacy browsers that refuse to accept cookies that specify SameSite=None, and guidance on their rollout timeline.

-Eric

PS: If you’re a Fiddler user, you can use this script to easily visualize which cookies are being set with which SameSite values.

Cookie-dough hero photo by Pam Menegakis on Unsplash.

Note: This post is part of a series about Web-to-App Communication techniques.

Just over eight years ago, I wrote my last blog post about App Protocols, a class of URL schemes that typically1 open another program on your computer instead of returning data to the web browser. 

App Protocols2 are both simple and powerful, allowing client app developers to easily enable the invocation of their apps from a website. For instance, ms-screenclip is a simple app protocol built into Windows 10 that kicks off the process of taking a screenshot:

    ms-screenclip:?delayInSeconds=2

When the user invokes this URL, the handler waits two seconds, then launches its UI to collect a screenshot. Notably, App Protocols are fire-and-forget, meaning that the handler has no direct way to return data back to the browser that invoked the protocol.

The power and simplicity of App Protocols comes at a cost. They are the easiest route out of browser sandboxes and are thus terrifying, especially because this exploit vector is stable and available in every browser from legacy IE to the very latest versions of Chrome/Firefox/Edge/Safari.

What’s the Security Risk?

A number of issues make App Protocols especially risky from a security point-of-view.

Careless App Implementation

The primary security problem is that most App Protocols were designed to address a particular scenario (e.g. a “Meet Now” page on a videoconferencing vendor’s website should launch the videoconferencing client) and they were not designed with the expectation that the app could be exposed to potentially dangerous data from the web at large.

We’ve seen apps where the app will silently reconfigure itself (e.g. sending your outbound mail to a different server) based on parameters in the URL it receives. We’ve seen apps where the app will immediately create or delete files without first confirming the irreversible operation with the user. We’ve seen apps that assumed they’d never get more than 255 characters in their URLs and had buffer-overflows leading to Remote Code Execution when that limit was exceeded. The list goes on and on.

Poor API Contract

In most cases3, App Protocols are implemented as a simple mapping between the protocol scheme (e.g. “alert”) and a shell command, e.g. 

[Screenshot: the registry mapping for an alert: protocol handler]

When the protocol is invoked by the browser, it simply bundles up the URL and passes it on the command line to the target application. The app doesn’t get any information about the caller (e.g. “What browser or app invoked this?“, “What origin invoked this?“, etc) and thus it must make any decisions solely on the basis of the received URL.
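That mapping lives in the registry. A sketch of the conventional registration for a hypothetical alert: scheme looks like this; the browser substitutes the full URL for %1:

HKEY_CLASSES_ROOT\alert
    (Default)    = "URL:Alert Protocol"
    URL Protocol = ""
    shell\open\command
        (Default) = "C:\Program Files\Alert\alert.exe" "%1"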

Until recently, there was an even bigger problem here, related to the encoding of the URL. Browsers, including Chrome, Edge, and IE, were willing to include bare spaces and quotation marks in the URL argument, meaning that an app could launch with a command line like:

alert.exe "alert:This is an Evil URL" --DoSomethingDangerous --Ignore=This"

The app’s code saw the --DoSomethingDangerous “argument”, failed to recognize it as a part of the URL, and invoked dangerous functionality. This attack led to remote code execution bugs in App Protocol handlers many times over the years.

Chrome began %-escaping spaces and quotation marks8 back in Chrome 64, and Edge 18 followed suit in Windows 10 RS5.

You can see how your browser behaves using the links on this test page.

Future Opportunity: A richer API contract that allows an App Protocol handler to determine how specifically it was invoked would allow it to better protect itself from unexpected callers. Moving the App Protocol URL data from the command line to somewhere else (e.g. stdin) might help reduce the possibility of parsing errors.

Sandbox

The application that handles the protocol typically runs outside of the browser’s sandbox. This means that a security vulnerability in the app can be exploited to steal or corrupt any data the user can access, install malware, etc. If the browser is running Elevated (as Administrator), any App Protocol handlers it invokes are launched Elevated; this is part of UAC’s design.

Because most apps are written in native code, the result is that most protocol handlers end up in the DOOM! portion of this diagram:

[Diagram: the “Rule of Two” risk diagram]

Prompting

In most cases, the only4 thing standing between the user and disaster is a permission prompt.

In Internet Explorer, the prompt looked like this:

[Screenshot: Internet Explorer’s protocol launch permission prompt]
As you can see, the dialog provides a bunch of context about what’s happening, including the raw URL, the name of the handler, and a remark that allowing the launch may harm the computer.

Such information is lacking in more modern browsers, including Firefox:

[Screenshot: Firefox’s protocol launch prompt]

…and Edge/Chrome:

[Screenshot: Edge/Chrome’s protocol launch prompt]

Browser UI designers (reasonably) argue that the vast majority of users are poorly equipped to make trust decisions, even with the information in the IE prompt, and so modern UI has been greatly simplified5.

Unfortunately, lost to evolution is the crucial indication to the user that they’re even making a trust decision at all.

Eliminating Prompts

Making matters more dangerous, everyone hates security prompts that interrupt non-malicious scenarios. A common user request can be summarized as “Prompt me only when something bad is going to happen. In fact, in those cases, don’t even prompt me, just protect me.”

Unfortunately, the browser cannot know whether a given App Protocol invocation is good or evil, so it delegates control of prompting in two ways:

In Internet Explorer and Edge (version <= 18), the browser respects a per-protocol WarnOnOpen boolean in the registry, such that the App itself may tell the browser: “No worries, let anyone launch me, no prompt needed.”

In Firefox, Chrome, and Edge (version >= 76), the browser instead allows the user to suppress further prompts with an “Always open these types of links” checkbox. (UX nit: This text is wrong and should instead say “Always open this type of link” or “Always open links of this type.”)

If the user selects this option, the protocol will silently launch in the future without the browser first asking permission.

However, Edge/Chrome version 77.0.3864 removed the “Always open these types of links in the associated app” checkbox. The stated reason is found in Chrome issue #982341:

No obvious way to undo “Always open these types of links” decision for External Protocols.

We realized in a conversation around issue 951540 that we don’t have settings UI
that allows users to reconsider decisions they’ve made around external protocol
support. Until we work that out, and make longer-term decisions about the
permission model around the feature generally, we should stop making the problem
worse by removing that checkbox from the UI.

A user who had ticked the “Always open” box has no way to later review that decision6, and no obvious way to reverse it. Almost no one figured out that using the “Clear Browsing Data > Cookies and other site data” dialog box option directs Chrome to delete all previous “Always open” decisions for the user’s profile. 

Particularly confusing is that the “Always open” decision isn’t made on a per-site basis; it applies to every site visited by the user in that browser profile.

Update: An Enterprise policy for v79+ allows administrators to restore the checkbox.

Future Opportunity: Much of the risk inherent in open-without-prompting behavior comes from the fact that any random site (http://evil.example.com) can abuse ambient permission to launch the protocol handler. If browsers changed the option to “Always allow this site to open this protocol”, the risk would be significantly reduced, and a user could reasonably safely allow e.g. https://teams.microsoft.com to open the msteams protocol without a prompt.

Alternatively, perhaps the Registry-based provisioning of a protocol handler should explicitly list the sites allowed to launch the protocol, akin to the SiteLock protection for legacy ActiveX controls.

For some schemes7, Chrome will not even show a prompt because the protocol is included on a built-in allow or deny list.

Some security folks have argued that browsers should not provide any mechanism for skipping the permission prompt. Unfortunately, there’s evidence to suggest that such a firm stance might result in vendors avoiding the prompt by choosing even riskier architectures for Web-to-App communication. More on this in a future post.

Zero-Day Defense

Even when a zero day vulnerability in an App Protocol handler is getting exploited in the wild (e.g. this one), browsers have few defenses available to protect users. 

Unlike, say, file downloads, where the browser has multiple mechanisms to protect users against newly-discovered threats (e.g. file type policies and SmartScreen/SafeBrowsing), browsers do not presently have rapid update mechanisms that can block access to known vulnerable App Protocol handlers.

Future Opportunity: Use SafeBrowsing/SmartScreen or a file-type-policies style Component Update to supply the client with a list of known-vulnerable protocol handlers. If a page attempts to invoke such a protocol, either block it entirely or strongly caution the user.

To improve the experience even further, the blocklist could contain version information such that blocking/additional warnings would only be shown if the version of the handler app is earlier than the version number of the app containing the fix. 

Antivirus programs typically do monitor all calls to ShellExecute and could conceivably protect against malicious invocation of app protocol handlers, but I am not aware of any having ever done so.

Privacy Concerns Prevent Protocol Detection

One of the most common challenges for developers who want to use App Protocols for Web-to-App communication is that the web platform does not expose the list of available protocol handlers to JavaScript. This is primarily a privacy consideration: exposing the list of protocol handlers to the web would expose a significant amount of fingerprintable entropy and might even reveal things about the user’s interests and beliefs (e.g. a ConservativeNews App or a LGBTQ App might expose a protocol handler for app-to-app communication).

Internet Explorer and Edge <= 18 supply a non-standard JavaScript function, msLaunchUri, that allows a web page to detect that a user doesn’t have a protocol handler installed, but this function is not available in other browsers.
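A sketch of how a page can feature-detect and use that API (IE/Edge <= 18 only; the msteams: scheme is just an example):

const nav = navigator as any; // msLaunchUri is absent from standard typings
if (typeof nav.msLaunchUri === "function") {
  nav.msLaunchUri(
    "msteams:",
    () => console.log("Protocol handler launched"),
    () => console.log("No handler is installed for this scheme")
  );
}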

UX When a Protocol Isn’t Installed

Browser behavior varies if the user attempts to invoke a link with a scheme for which no protocol handler is registered.

Firefox shows an error page:

[Screenshot: Firefox’s error page for an unregistered protocol]

On Windows 8 and later, IE and Edge<=18 show a prompt that offers to take the user to the Microsoft store to search for a protocol handler for the target scheme:

[Screenshot: the Windows prompt offering to search the Microsoft Store for a handler]

Unfortunately, this search is rarely fruitful because most apps are not available in the Microsoft Store.

Interestingly, Chrome and Edge 76+ show nothing at all when attempting to invoke a link for which no protocol handler is installed. Surprisingly, there’s no notice even in the Developer Tools console; a particularly thorough debugger will only see a “(canceled)” request in the DevTools’ Network tab.

Upcoming change – Require HTTPS to Invoke

Chrome is looking at requiring that a page be served over HTTPS in order for it to invoke an application protocol.

 

In future posts, I’ll explore some other alternatives for Web-to-App communication.

-Eric


Notes

1 In some browsers, it’s possible to register web-based handlers for “AppProtocols” (e.g. maps: and mailto: might go to Google Maps and GMail respectively), but this is relatively uncommon.

2 Within Chromium, App Protocols are called “External Protocols.”

3 There are other ways to handle protocols, including COM and the Windows 10 App Model’s URI Activation mechanism, but these are uncommon.

4 As an anti-abuse mechanism, the browser may require a user-gesture (e.g. a mouse click) before attempting to launch an App Protocol, and may throttle invocations to avoid spamming the user with an infinite stream of prompts.

5 Chrome’s prompt used to look much like IE’s.

6 Short of opening the Preferences for the profile in Notepad or another text editor. E.g. after choosing “Always open” for Microsoft Teams and Skype for Business, the JSON file %localappdata%\Microsoft\Edge SxS\User Data\Default\Preferences contains

"protocol_handler":{"excluded_schemes":{"msteams":false, "ms-sfb": false}}

To see the list in IE/Edge<=18, you can run a registry query to find protocols with WarnOnOpen set to 0:

reg query "HKCU\SOFTWARE\Microsoft\Internet Explorer\ProtocolExecute" /s
reg query "HKLM\SOFTWARE\Microsoft\Internet Explorer\ProtocolExecute" /s


7 Hardcoded schemes:

kDeniedSchemes[] = {"afp","data","disk","disks","file","hcp","ie.http","javascript","ms-help","nntp","res","shell","vbscript","view-source","vnd.ms.radio"}
kAllowedSchemes[] = {"mailto", "news", "snews"};

8 The EscapeExternalHandlerValue function:

// Escapes characters in text suitable for use as an external protocol
// handler command. We %XX everything except alphanumerics and -_.!~*'()
// and the restricted characters (;/?:@&=+$,#[]) and a valid percent
// escape sequence (%XX).
EscapeExternalHandlerValue()

 

Many websites offer a “Log in” capability where they don’t manage the user’s account; instead, they offer visitors the ability to “Login with <identity provider>.”

When the user clicks the Login button on the original relying party (RP) website, they are navigated to a login page at the identity provider (IP) (e.g. login.microsoft.com) and then redirected back to the RP. That original site then gets some amount of the user’s identity info (e.g. their Name & a unique identifier) but it never sees the user’s password.

Such Federated Identity schemes have benefits for both the user and the RP site: the user doesn’t need to set up yet another password and the site doesn’t have to worry about the complexity of safely storing the user’s password, managing forgotten passwords, etc.

In some cases, the federated identity login process (typically implemented as a JavaScript library) relies on navigating the user to a top-level page to log in, then back to the relying party website into which the library injects an IFRAME1 back to the identity provider’s website.

[Diagram: the federated login flow described above]

The authentication library in the RP top-level page communicates with the IP subframe (using postMessage or the like) to get the logged-in user’s identity information, API tokens, etc.

In theory, everything works great. The IP subframe in the RP page knows who the user is (by looking at its own cookies or HTML5 localStorage or indexedDB data) and can release to the RP caller whatever identity information is appropriate.
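A sketch of that handshake from the RP’s side; identity.example and the message shape are illustrative, not any particular library’s protocol:

// In the RP top-level page, ask the identity provider's frame who is logged in
const ipFrame = document.querySelector<HTMLIFrameElement>("iframe#idp");
ipFrame?.contentWindow?.postMessage({ type: "whoami" }, "https://identity.example");

window.addEventListener("message", (e) => {
  if (e.origin !== "https://identity.example") return; // always validate origin
  console.log("Identity provider reports user:", e.data.user);
});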

Crucially, however, notice that this login flow is entirely dependent upon the assumption that the IP subframe is accessing the same set of cookies, HTML5 storage, and/or indexedDB data as the top-level IP page. If the IP subframe doesn’t have access to the same storage, then it won’t recognize the user as logged in.

Unfortunately, this assumption has been problematic for many years, and it’s becoming even more dangerous over time as browsers ramp up their security and privacy features.

The root of the problem is that the IP subframe is considered a third-party resource, because it comes from a different domain (identity.example) than the page (news.example) into which it is embedded.

For privacy and security reasons, browsers might treat third-party resources differently than first-party resources. Examples include:

  1. The Block 3rd Party cookies option in most browsers
  2. The SameSite Cookie attribute
  3. P3P cookie blocking in Internet Explorer2
  4. Zone Partitioning in Internet Explorer and Edge Spartan3
  5. Safari’s Intelligent Tracking Protection
  6. Firefox Content Blocking
  7. Microsoft Edge Tracking Prevention

When a browser restricts access to storage for a 3rd party context, our theoretically simple login process falls apart. The IP subframe on the relying party doesn’t see the user’s login information because it is loaded in a 3rd party context. The authentication library is likely to conclude that the user is not logged in, and redirect them back to the login page. A frustrating and baffling infinite loop may result as the user is bounced between the RP and IP.

The worst part of all of this is that a site’s login process might usually work, but fail depending on the user’s browser choice, browser configuration, browser patch level, security zone assignments, or security/privacy extensions. As a result, a site owner might not even notice that some fraction of their users are unable to log in.

So, what’s a web developer to do?

The first task is awareness: Understand how your federated login library works — is it using cookies? Does it use subframes? Is the IP site likely to be considered a “Tracker” by popular privacy lists?

The second task is to build designs that are more resilient to 3rd-party storage restrictions:

  • Be sure to convey the expected state from the Identity Provider’s login page back to the Relying Party. E.g. if your site automatically redirects from news.example to identity.example/login back to news.example/?loggedin=1, the RP page should take note of that URL parameter. If the authentication library still reports “Not signed in”, avoid an infinite loop and do not redirect back to the Identity Provider automatically.
  • Authentication libraries should consider conveying identity information back to the RP directly, which will then save that information in a first party context. For instance, the IP could send the identity data to the RP via an HTTP POST, and the RP could then store that data using its own first party cookies.
  • For browsers that support it, the Storage Access API may be used to allow access to storage that would otherwise be unavailable in a 3rd-party context. Note that this API might require action on the part of the user (e.g. a frame click and a permission prompt); see the sketch after this list.
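A sketch of the Storage Access API approach, running inside the identity provider’s subframe (requestStorageAccess generally must be called from a user gesture, such as a click handler):

async function ensureStorageAccess(): Promise<void> {
  if (await document.hasStorageAccess()) return; // already have 1st-party storage
  try {
    await document.requestStorageAccess(); // may show a permission prompt
    // The frame can now read its own cookies/storage as if first-party
  } catch {
    // The user or browser policy declined; fall back to a top-level login
  }
}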

The final task is verification: Ensure that you’re testing your site in modern browsers, with and without the privacy settings ratcheted up.

-Eric

[1] The call back to the IP might not use an IFRAME; it could also use a SCRIPT tag to retrieve JSONP, or issue a fetch/XHR call, etc. The basic principles are the same.
[2] P3P was removed from IE11 on Windows 10.
[3] In Windows 10 RS2, Edge 15 “Spartan” started sharing cookies across Security Zones, but HTML5 Storage and indexedDB remain partitioned.

InPrivate Mode was introduced in Internet Explorer 8 with the goal of helping users improve their privacy against both local and remote threats. Safari introduced a privacy mode in 2005.

All leading browsers offer a “Private Mode” and they all behave in the same general ways.

HTTP Caching

While in Private mode, browsers typically ignore any previously cached resources and cookies. Similarly, the Private mode browser does not preserve any cached resources beyond the end of the browser session. These features help prevent a revisited website from trivially identifying a returning user (e.g. if the user’s identity were cached in a cookie or JSON file on the client) and help prevent “traces” that might be seen by a later user of the device.

In Firefox’s and Chrome’s Private modes, a memory-backed cache container is used for the HTTP cache, and its memory is simply freed when the browser session ends. Unfortunately, WinINET never implemented a memory cache, so in Internet Explorer InPrivate sessions, data is cached in a special WinINET cache partition on disk which is “cleaned up” when the InPrivate session ends.

Because this cleanup process may be unreliable, in 2017, Edge made a change to simply disable the cache while running InPrivate, a design decision with significant impact on the browser’s network utilization and performance. For instance, consider the scenario of loading an image gallery that shows one large picture per page and clicking “Next” ten times:

[Chart: requests and bytes downloaded for the gallery scenario, InPrivate vs. regular]

Because the gallery reuses some CSS, JavaScript, and images across pages, disabling the HTTP cache means that these resources must be re-downloaded on every navigation, resulting in 50 additional requests and a 118% increase in bytes downloaded for those eleven pages. Sites that reuse even more resources across pages will be more significantly impacted.

Another interesting quirk of Edge’s InPrivate implementation is that the browser will not download FavIcons while InPrivate. Surprisingly (and likely accidentally), the suppression of FavIcon downloads also occurs in any non-InPrivate windows so long as any InPrivate window is open on the system.

Web Platform Storage

Akin to the HTTP caching and cookie behaviors, browsers running in Private mode must restrict access to web platform storage (e.g. HTML5 localStorage, ServiceWorker/Cache API, IndexedDB) to help prevent association/identification of the user and to avoid leaving traces behind locally. In some browsers and scenarios, storage mechanisms are simply backed by an “ephemeral partition,” while in others the DOM APIs providing access to storage are simply configured to return “Access Denied” errors.

You can explore the behavior of various storage mechanisms by loading this test page in Private mode and comparing to the behavior in non-Private mode.

Within IE and Edge’s InPrivate mode, localStorage uses an in-memory store that behaves exactly like the sessionStorage feature. This means that InPrivate’s storage is (incorrectly) not shared between tabs, even tabs in the same browser instance.

Network Features

Beyond the typical Web Storage scenarios, browsers’ Private modes should also undertake efforts to prevent association of users’ Private instance traffic with non-Private instance traffic. Impacted features here include anything that has a component that behaves “like a cookie,” including TLS Session Tickets, TLS Resumption, HSTS directives, TCP Fast Open, Token Binding, ChannelID, and the like.

Automatic Authentication

In Private mode, a browser’s AutoComplete features should be set to manual-fill mode to prevent a “NameTag” vulnerability, whereby a site can simply read an auto-filled username field to identify a returning user.

On Windows, most browsers support silent and automatic authentication using the current user’s Windows login credentials and either the NTLM or Kerberos schemes. Typically, browsers are only willing to automatically authenticate to sites on “the Intranet.” Some browsers behave differently when in Private mode, preventing silent authentication and forcing the user to manually enter or confirm an authentication request.

In Firefox Private Mode and Edge InPrivate, the browser will not automatically respond to a HTTP/401 challenge for Negotiate/NTLM credentials.

In Chrome Incognito, Brave Incognito, and IE InPrivate, the browser will automatically respond to a HTTP/401 challenge for Negotiate/NTLM credentials even in Private mode.

Notes:

  • In Edge, the security manager returns MustPrompt when queried for URLACTION_CREDENTIALS_USE.
  • Unfortunately Edge’s Kiosk mode runs InPrivate, meaning you cannot easily use Kiosk mode to implement a display that projects a dashboard or other authenticated data on your Intranet.
  • For Firefox to support automatic authentication at all, the
    network.negotiate-auth.allow-non-fqdn and/or network.automatic-ntlm-auth.allow-non-fqdn preferences must be adjusted.

Detection of Privacy Modes

While browsers generally do not try to advertise to websites that they are running inside Private modes, it is relatively easy for a website to feature-detect this mode and behave differently. For instance, some websites like the Boston Globe block visitors in Private Mode (forcing login) because they want to avoid circumvention of their “Non-logged-in users may only view three free articles per month” paywall logic.

Sites can detect privacy modes by looking for the behavioral changes that signal that a given browser is running in Private mode; for instance, indexedDB is disabled in Edge while InPrivate. Detectors have been built for each browser and wrapped in simple JavaScript libraries. Defeating Private mode detectors requires significant investment on the part of browsers (e.g. “implement an ephemeral mode for indexedDB”) and fixes lagged until mainstream news sites (e.g. Boston Globe, New York Times) began using these detectors more broadly.
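For instance, a sketch of the kind of check those detector libraries used against legacy Edge, where indexedDB was simply unavailable InPrivate:

function looksLikeEdgeInPrivate(): boolean {
  // In Edge 18 and earlier, window.indexedDB was missing while InPrivate
  return typeof window !== "undefined" && !window.indexedDB;
}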


Advanced Private Modes

Generally, mainstream browsers have taken a middle ground in their privacy features, trading off some performance and some convenience for improved privacy. Users who are very concerned about maintaining privacy from a wider variety of threat actors need to take additional steps, like running their browser in a discardable Virtual Machine behind an anonymizing VPN/Proxy service, disabling JavaScript entirely, etc.

The Brave Browser offers a “Private Window with Tor” feature that routes traffic over the Tor anonymizing network; for many users this might be a more practical choice than the highly privacy-preserving Tor Browser Bundle, which offers additional options like built-in NoScript support to help protect privacy.

-Eric

Update: The October 2018 Cumulative Security Update (KB4462919) brings the RS5 Cookie Control changes described below to Windows 10 RS2, RS3, and RS4.

Cookies are one of the most crucial features in the web platform, and large swaths of the web don’t work properly without them. Unfortunately, cookies are also one of the primary mechanisms that trackers and ad networks utilize to follow users around the web, potentially impacting users’ privacy. To that end, browsers have offered cookie controls for over twenty years.

Back in 2010, I wrote a summary of Internet Explorer’s Cookie Controls. IE’s cookie controls were very granular and quite powerful. The basic settings were augmented with P3P, a once-promising feature that allowed sites to advertise their privacy practices and browsers to automatically enforce users’ preferences against cookies. Unfortunately, major sites created fraudulent P3P statements, regulators failed to act, and the entire (complicated) system collapsed. P3P was removed from IE11 on Windows 10 and never implemented in Microsoft Edge.

Instead, Edge offers a very simple cookie control in the Privacy and Security section of the settings. Under the Cookies option, you have three choices: Don’t block cookies (the default), Block all cookies, and Block only third party cookies:

[Screenshot: Edge’s cookie setting with its three options]

This simple setting hides a bunch of subtlety that this post will explore.

Cookie => Cookie-Like

For the October 2018 update (aka “Redstone Five” aka “RS5”) we’ve made some important changes to Edge’s Cookie control.

The biggest of the changes is that Edge now matches other browsers, and uses the cookie controls to restrict cookie-like storage mechanisms, including localStorage, sessionStorage, indexedDB, the Cache API, and ServiceWorkers. Each of these features can behave much like a cookie, with a similar potential impact on users’ privacy.

While we didn’t change the UI, it would be accurate to change it to:

[Mockup: the cookie setting relabeled to cover cookie-like storage]

This change improves privacy and can even improve site compatibility. During our testing, we were surprised to discover that some website flows fail if the browser blocks only 3rd party cookies without also blocking 3rd-party localStorage. This change brings Edge in line with other browsers with minor exceptions. For example, in Firefox 62, when 3rd-party site data is blocked, sessionStorage is still permitted in a 3rd-party context. In Edge RS5 and Chrome, 3rd party sessionStorage is blocked if the user blocks 3rd-party cookies.
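A sketch of the kind of probe a page can use to see whether storage is writable in its current context; in a third-party frame with cookies blocked, the access throws:

function storageAvailable(): boolean {
  try {
    localStorage.setItem("__probe__", "1"); // throws if storage is blocked
    localStorage.removeItem("__probe__");
    return true;
  } catch {
    return false;
  }
}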

Block Setting and Sending

Another subtlety exists because of the ambiguous terminology “third-party cookie.” A cookie is just a cookie: it belongs to a site (eTLD+1). The “party” comes into play in the context where the cookie was set and the context where it is sent.

In the web platform, unless a browser implements restrictions:

  • A cookie set in a first-party context will be sent to a first-party context
  • A cookie set in a first-party context will be sent to a third-party context
  • A cookie set in a third-party context will be sent to a first party context
  • A cookie set in a third-party context will be sent to a third-party context

For instance, in this sample page, if the IFRAME and IMG both set a cookie, these cookies are set in a third-party context:

[Diagram: a first-party page embedding a third-party IFRAME and IMG]

  • If the user subsequently visits domain2.com, the cookie set by that 3rd-Party IFRAME will now be sent to the domain2.com server in a 1st-Party context.
  • If the user subsequently visits domain3.com, the cookie set by that 3rd-Party IMG will now be sent to the domain3.com server in a 1st-Party context.

Historically, Edge and IE’s “Block 3rd party cookies” options controlled only whether a cookie could be set from a 3rd party context, but did not impact whether a cookie initially set in a 1st party context would be sent to a 3rd party context.

As of Edge RS5, setting “Block only 3rd party cookies” will now also block cookies that were set in a 1st party context from being sent in a 3rd-party context. This change is in line with the behavior of other browsers.

Edge Controls Impacted By Zones

With the move from Internet Explorer to Edge, the Windows Security Zones architecture was largely left by the wayside.

[Screenshot: Windows Security Zones settings]

However, cookie controls are one of a small number of exceptions to this; Edge applies the cookie restrictions only in the Internet Zone, the zone almost all sites fall into (outside of users on corporate networks).

Perhaps surprisingly, cookie-like features and the document.cookie getter are restricted, even in the Intranet and Trusted zones.

Chrome and Firefox do not take Windows Security Zones into account when applying cookie policies.

Test Cases

I’ve updated my old “Cookies” test page with new storage test cases. You can set your browser’s privacy controls:

[Screenshot: Chrome’s third-party cookie blocking setting]

[Screenshot: Firefox’s content blocking setting]

…then visit the test page to see how the browser limits features from 3rd-party contexts. You can use the Swap button on the page to swap 1st-party and 3rd-party contexts to see how restrictions have been applied. You should see that the latest versions of Chrome, Firefox, and Edge all behave pretty much the same way.

One interesting exception is that when configured to Block 3rd-party Cookies, Edge still allows 3rd-party contexts to delete their own cookies. (This is used by federated logout pages, for instance.) Chrome does not allow deletion in this scenario; the attempt to delete cookies is ignored.

 

-Eric


Appendix: Chromium Audit

In the course of our site-compatibility investigations, I had a look at Chromium’s behavior with regard to their cookie controls. In Chromium, Blink asks the host application for permission to use various storages, and these chokepoints check:

cookie_settings_->IsCookieAccessAllowed(origin_url, top_origin_url);

…which is sensitive to the various “Block Cookies” settings.

Mojo messages come up through renderer_host/chrome_render_message_filter.cc, and ChromeContentBrowserClient provides additional chokepoints; both gate access to storage features using the same check.

Elsewhere, IsCookieAccessAllowed is used to limit:

  • Flash Storage (PP_FLASHLSORESTRICTIONS_BLOCK)
  • Client Hints

Of these, Edge does not support WebSQL, FileSystem, SharedWorker, or Client Hints.