
This is an introduction/summary post which will link to individual articles about browser mechanisms for communicating directly between web content and native apps on the local computer.

This series aims to provide, for each mechanism, information about:

  • On which platforms is it available?
  • Can the site detect that the app/mechanism is available?
  • Can the site send more than one message to the application without invoking the mechanism again, or is it fire-and-forget?
  • Can the application bidirectionally communicate back to the web content via the same mechanism?
  • What are the security implications?
  • What is the UX?

Application Protocols

Blog Post

tl;dr: Apps can register protocol schemes. Browsers will spawn the apps when navigating to the scheme.

Characteristics: Fire-and-Forget. Non-detectable. Supported across all browsers for decades. Prompts by default, but can be disabled.
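
For illustration, here's a minimal sketch of how a page might invoke such a handler; the myapp: scheme is a made-up placeholder, and the page gets no signal about whether any handler is installed:

// TypeScript sketch. "myapp:" is a hypothetical scheme registered by a native app.
function launchNativeApp(payload: string): void {
  // Navigating to the scheme asks the browser to spawn the registered handler.
  // Nothing flows back to the page; this is strictly fire-and-forget.
  window.location.href = "myapp://open?data=" + encodeURIComponent(payload);
}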

Native Messaging via Extensions

Blog Post – Coming someday. For now, see nativeMessaging API.

tl;dr: Browser extensions can communicate with a local native app over stdin/stdout, passing JSON between the app and the extension. The extension may pass information to/from web content if desired.

Characteristics: Bi-directional communications. Detectable. Supported across most modern browsers; not legacy IE. Dunno about Safari. Prompts on install, but not required to use.
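
As a rough sketch of the extension side (using Chromium's chrome.runtime API; the host name com.example.host is a placeholder that the native app would register in its manifest, and the extension needs the nativeMessaging permission):

// TypeScript sketch, assuming the Chrome extension type definitions.
// "com.example.host" is a hypothetical native messaging host name.
const port = chrome.runtime.connectNative("com.example.host");

// Messages are JSON objects serialized over the native app's stdin/stdout.
port.onMessage.addListener((message: object) => {
  console.log("Reply from native app:", message);
});

port.postMessage({ text: "ping" });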

File downloads (Traditional)

Blog Post – Coming someday.

tl;dr: Apps can register to handle certain file types. User may spawn the app to open the file.

Characteristics: Fire-and-Forget. Non-detectable. Supported across all browsers. Prompts for most file types, but some browsers allow disabling.

File downloads (DirectInvoke)

Blog Post

Internet Explorer/Edge support DirectInvoke, a scheme whereby a file handler application is launched with a URL instead of a local file.

Characteristics: Fire-and-Forget. Non-detectable. Windows only. Supported in Internet Explorer, Edge 18 and below, and Edge 78 and above. Degrades gracefully into a traditional file download.

Local Web Server

Blog Post – Coming someday.

tl;dr: Apps can run an HTTP(S) server on localhost, and webpages can communicate with that server using fetch/XHR/WebSocket/WebRTC etc.

Characteristics: Bi-directional communications. Detectable. Supported across all browsers. Not available on mobile. Complexities around Secure Contexts / HTTPS, and loopback network protections in Edge18/IE AppContainers.
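
As a sketch of the detectability point: a page can probe for a companion app's server and treat a successful response as "the app is running." The port (18800) and /status endpoint below are invented for illustration:

// TypeScript sketch; the port and endpoint are hypothetical.
async function detectLocalApp(): Promise<boolean> {
  try {
    const response = await fetch("http://localhost:18800/status", {
      signal: AbortSignal.timeout(1000), // Fail fast if nothing is listening.
    });
    return response.ok;
  } catch {
    return false; // Connection refused, blocked, or timed out.
  }
}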

HTTPS

Because HTTPS pages generally may not send requests to HTTP URLs (unless the browser supports the new “Secure Contexts” carve-out that treats HTTP://LOCALHOST as secure), some applications wish to get an HTTPS certificate for their local servers. This is complex and error-prone. There’s a writeup of how Plex got HTTPS certificates for their local servers.

Notes: https://wicg.github.io/cors-rfc1918/#mixed-content

A nice writeup of Amazon Music’s web exposure can be found here: https://medium.com/0xcc/what-the-heck-is-tcp-port-18800-a16899f0f48f

Andrew (@drewml) tweeted at 4:23 PM on Tue, Jul 09, 2019:
The @zoom_us vuln sucks, but it’s definitely not new. This was/is a common approach used to sidestep the NPAPI deprecation in Chrome. Seems like a @taviso favorite:
Anti virus, logitech, utorrent. (https://twitter.com/drewml/status/1148704362811801602?s=03)

Bypass of localhost CORS protections by utilizing GET request for an Image
https://bugs.chromium.org/p/chromium/issues/detail?id=951540#c28


WebRTC tricks to bypass HTTPS requirements https://twitter.com/sleevi_/status/1177248901990105090?s=20

Variant: Common Remote Server as a Broker

An alternative approach would be to communicate indirectly. For instance, a web application and a client application using HTTPS/WebSockets could each individually communicate to a common server which brokers messages between them.
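
A minimal sketch of the web side, assuming a hypothetical relay at wss://broker.example that forwards messages between the two parties:

// TypeScript sketch. Both the page and the native app dial out to the same
// (hypothetical) broker, so neither needs to accept inbound connections.
const ws = new WebSocket("wss://broker.example/session/abc123");
ws.addEventListener("open", () => {
  ws.send(JSON.stringify({ from: "web", text: "hello" }));
});
ws.addEventListener("message", (event) => {
  console.log("Relayed from native app:", event.data);
});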

 

AppLinks in Edge/Windows

Allow navigation to certain namespaces (domains) to be handed off to a native application on the local device.

https://blogs.windows.com/windowsdeveloper/2016/10/14/web-to-app-linking-with-appurihandlers/

https://docs.microsoft.com/en-us/windows/uwp/launch-resume/web-to-app-linking

Legacy Plugins/ActiveX architecture

Please no!

Characteristics: Bi-directional communications. Detectable. Support has been mostly removed from most browsers. Generally not available on mobile. One of the biggest sources of security risk in web platform history.

Android Intents

Dunno much about these.

Android Instant Apps

Dunno much about these. Basically, the idea is that navigating to a website can install/run an Android Application.

 

When I launched Chrome on Thursday, I saw something unexpected:

[Screenshot: Chrome’s SSLKeyLogFile warning bar]

While most users probably would have no idea what to make of this, I happened to know what it means– Chrome is warning me that the system configuration has instructed it to leak the secret keys it uses to encrypt and decrypt HTTPS traffic to a stream on the local computer.

Looking at the Chrome source code, I found that this warning was newly added last week. More surprising was that I couldn’t find the SSLKeyLogFile setting anywhere on my system. Opening a new console showed that it wasn’t set:

C:\WINDOWS\system32>set sslkeylogfile
Environment variable sslkeylogfile not defined

…and opening the System Properties > Advanced > Environment Variables UI showed that it wasn’t set for either my user account or the system at large. Weird.

Fortunately, I understood from past investigations that a process can have different environment variables than the rest of the system, and Process Explorer can show the environment variables inside a running process. Opening Chrome.exe, we see that it indeed has an SSLKEYLOGFILE set:

[Screenshot: Process Explorer showing SSLKEYLOGFILE set inside Chrome.exe]

The unusual syntax with the leading \\.\ means that this isn’t a typical local file path but a named pipe: it doesn’t point to a file on disk (e.g. C:\temp\sslkeys.txt) but to memory that another process can see.

My machine was in this state because earlier that morning, I’d installed Avast Antivirus to attempt to reproduce a bug a Chrome user encountered. Avast is injecting the SSLKEYLOGFILE setting so that it can conduct a monster-in-the-browser (MITB) attack and inspect the otherwise-encrypted traffic going into Chrome.

Makers of antivirus products know that browsers are one of the primary vectors by which attackers compromise PCs, and as a consequence their security products often conduct MITB attacks in order to scan web content. Antivirus developers have two common techniques to scan content running in the browser:

  1. Code injection
  2. Network interception

Code Injection

The code injection technique relies upon injecting security code into the browser process. The problem with this approach is that native code injections are inherently fragile– any update to the browser might move its functions and data structures around such that the security code will fail and crash the process. Browsers discourage native code injection, and the bug I was looking at was related to a new feature, RendererCodeIntegrity, that directs the Windows kernel to block loading of any code not signed by Microsoft or Google into the browser’s renderer processes.

An alternative code-injection approach relies upon using a browser extension that operates within the APIs exposed by the browser– this approach is more stable, but can address fewer threats.

Even well-written code injections that don’t cause stability problems can cause significant performance regressions for browsers– when I last looked at the state of the industry, performance costs for top AV products ranged from 20% to 400% in browser scenarios.

Network Interception

The Network interception technique relies upon scanning the HTTP and HTTPS traffic that goes into the browser process. Scanning HTTP traffic is straightforward (a simple proxy server can do it), but scanning HTTPS traffic is harder because the whole point of HTTPS is to make it impossible for a network intermediary to view or modify the plaintext network traffic.

Historically, the most common mechanism for security-scanning HTTPS traffic was to use a monster-in-the-middle (MITM) proxy server running on the local computer. The MITM would instruct Windows to trust a self-signed root certificate, and it would automatically generate new interception certificates for every secure site you visit. I spent over a decade working on such a MITM proxy server, the Fiddler Web Debugger.

There are many problems with using a MITM proxy, however. The primary problem is that it’s very very hard to ensure that it behaves exactly as the browser does and that it does not introduce security vulnerabilities. For instance, if the MITM’s certificate verification logic has bugs, then it might accept a bogus certificate from a spoof server and the user would not be warned– Avast used to use a MITM proxy and had exactly this bug; they were not alone. Similarly, the MITM might not support the most secure versions of protocols supported by the browser and server (e.g. TLS/1.3) and thus using the MITM would degrade security. Some protocol features (e.g. Client Certificates) are incompatible with MITM proxies. And lastly, some security features (specifically certificate pinning) are fundamentally incompatible with MITM certificates and are disabled when MITM certificates are used.

Given the shortcomings of using a MITM proxy, it appears that Avast has moved on to a newer technique, using the SSLKeyLogFile to leak the secret keys HTTPS negotiates on each connection to encrypt the traffic. Firefox and Chromium support this feature, and it enables decryption of TLS traffic without using the MITM certificate generation technique. While browser vendors are wary of any sort of interception of HTTPS traffic, this approach is generally preferable to MITM proxies.

There’s some worry that Chrome’s new notification bar might drive security vendors back to using more dangerous techniques, so this notification might not make its way into the stable release of Chrome.

When it comes to browser architecture, tradeoffs abound.

-Eric

PS: I’m told that Avast may be monetizing the data they’re decrypting.

Appendix: Peeking at the Keys

If we point the SSLKeyLog setting at a regular file instead of a named pipe:

chrome --ssl-key-log-file=C:\temp\sslkeys.txt

…we can examine the file’s contents as we browse to reveal the encryption keys:

[Screenshot: contents of the exported key log file]

This file alone isn’t very readable for a human (even if you read Mozilla’s helpful file format documentation), but you can configure tools like Wireshark to make use of it and automatically decrypt captured TLS traffic back to plaintext.
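
For instance, here's a sketch (Node.js/TypeScript, assuming the documented space-separated label/client-random/secret record format) of skimming the file without echoing the secrets themselves:

// TypeScript sketch; the file path matches the command line above.
import { readFileSync } from "node:fs";

const lines = readFileSync("C:/temp/sslkeys.txt", "utf8").split(/\r?\n/);
for (const line of lines) {
  if (!line || line.startsWith("#")) continue; // Skip blanks and comments.
  // Each record is "<LABEL> <client_random_hex> <secret_hex>".
  const [label, clientRandom = ""] = line.split(" ");
  console.log(label, clientRandom.slice(0, 16) + "...");
}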

In my last post, I showed you how to use OmahaProxy’s Find Releases tool to discover which versions of Chrome contain a given bugfix.

I noted that if you’re using Microsoft’s new Chromium-based Edge, you can look at the edge://version page or this extension to see the upstream Chrome version upon which Edge is based:

[Screenshot: Show Chrome Version extension]

Until today, I never realized that this version number can be off-by-one. I noticed this discrepancy after complaints about a nasty bug in Edge that caused a bunch of webpage renderers to crash. The fix, we learned from OmahaProxy, landed in Chrome Canary version 78.0.3872.0.

When Edge next updated, I was alarmed to see that the crash was still present, even though the user-agent header contained Chrome/78.0.3872.0, the version that’s supposed to have the fix.

[Screenshot: renderer crash still occurring (StillCrashing.png)]

What’s going on?

Chromium’s src/chrome/VERSION file gets updated shortly after the prior Chrome Canary ships, as you can see in this timeline:

  1. At 03:13 on Aug 1, /src/chrome/VERSION was updated to 3872.
  2. Around 04:00 on Aug 1, Edge’s code pump ingested all of the commits from upstream Chromium into our internal integration branch, preparing for our 78.0.242 Canary build.
  3. At 11:20 on Aug 1, the fix for the crash landed.
  4. At 23:16 on Aug 1, the last commit from Chromium Master that went into Chrome Canary 78.0.3872 landed.
    Within a few hours, the Canary branch was declared an Official release.
  5. At 03:13 on Aug 2, /src/chrome/VERSION was updated to 3873.

Because Edge’s code pump pulled between when the version number was bumped to 3872 [1] and when the crasher’s fix landed [3], Edge ended up with a build that has the 3872 version number, but without the fix that went into Chrome Canary 78.0.3872.

So, for the moment at least, the safe way to be sure that a given Edge build has a given Chromium fix is to ensure that the Chrome token in the user-agent string is later than the Chrome Canary build number in which the patch first shipped.

[Diagram: Chrome 3889 includes some fixes and bugs not present in Edge 3889]
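
As a sketch, a page or extension could apply that conservative rule by parsing the build number out of the user-agent string (the helper below is hypothetical):

// TypeScript sketch: does this Chromium-based browser definitely contain a
// fix that first shipped in a given Chrome Canary build number (e.g. 3872)?
function definitelyHasFix(fixFirstShippedInBuild: number): boolean {
  const match = navigator.userAgent.match(/Chrome\/\d+\.\d+\.(\d+)\.\d+/);
  const build = match ? Number(match[1]) : 0;
  // Require strictly greater, because an equal build may be off-by-one.
  return build > fixFirstShippedInBuild;
}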

-Eric

Yesterday, we covered the mechanisms that modern browsers can use to rapidly update their release channels. Today, let’s look at how to figure out when an eagerly awaited fix will become available in the Canary channels.

By way of example, consider crbug.com/977805, a nasty beast that caused some extensions to randomly be disabled and marked corrupt:

[Screenshot: an extension disabled and marked corrupted]

By bisecting the builds (topic of a future post) to find where the regression was introduced, we discovered that the problem was the result of a commit with hash fa8cdc81f5 that landed back on May 20th. This (probably security-motivated) change exposed an earlier bug in Chromium’s extension verification system, such that an aborted request for a resource in an extension (say, because a page was torn down just as a content script was being injected) resulted in the verification logic concluding that the extension’s resource file was corrupted on disk.

On July 12th, the area owner landed a fix with the commit hash of cad2f6468. But how do I know whether my browser has this fix already? In what version(s) did the fix get released?

To answer these questions, we turn back to our trusted OmahaProxy. In the Find Releases box at the bottom, paste the full or partial hash value into the box and hit the Find Releases button:

[Screenshot: pasting the fix’s commit hash into the Find Releases box]

The system will churn for a bit and then return the following page:

[Screenshot: Find Releases result showing where the fix landed]

So, now we know two things: 1) The fix will be in Chromium-based browsers with version numbers later than 77.0.3852.0, and 2) So far, the fix only landed there and hasn’t been merged elsewhere.

Does it need to be merged? Let’s figure out where the original regression landed, using the same tool with the regressing change list’s hash:

[Screenshot: Find Releases result for the regressing commit]

We see that the regression originally landed in Master before the Chrome 76 branch point, so the bug is in Chrome 76.0.3801 and later. That means that after the fix is verified, we’ll need to request that it be merged from Master where it landed, over to the 76 branch where it’s also needed.

We can see what that’ll look like by looking at the fix for crbug.com/980803. This regression in the layout engine was fixed by a1dd95e43b5 in 77, but needed to be put into Chromium 76 as well. So, it was, and the result is shown as:

[Screenshot: Merged]

Note: It’s possible for a merge to be performed but not show up here. The tool looks for a particular string in the merge’s commit message, and some developers accidentally remove or alter it.

Finally, if you’re really champing at the bit for a fix, you might run Find Releases on a commit hash and see

[Screenshot: a result indicating the commit isn’t in any release yet]

Assuming you didn’t mistype the hash, what this means is that the fix isn’t yet in the Canary channel. If you were to clone the Chromium master @HEAD and build it yourself, you’d see the fix, but it’s not yet in a public Canary. In almost all cases, you’ll need to wait until the next morning (Pacific time) to get an official channel build with the fix.

Now, so far we’ve mostly focused on Chrome, but what about other Chromium-based browsers?

Things are mostly the same, with the caveat that most other Chromium-based browsers are usually days to weeks to (gulp) months behind Chrome Canary. Is the extensions bug yet fixed in my Edge Canary?

The simplest (and generally reliable) way to check is to just look at the Chrome token in the browser’s user agent string by visiting edge://version or using my handy Show Chrome Version browser extension. As you can see in both places, Edge 77.0.220.0 Canary is based on Chromium 77.0.3843, a bit behind the 77.0.3852 version containing the extensions verification fix:

[Screenshot: Show Chrome Version extension showing the Chromium base version]

So, I’ll probably have to wait a few days to get this fix into my browser.

Warning: The “Chrome” token shown in Edge might be off-by-one. See my followup post for details.

Also, note that it’s possible for Microsoft and other Chromium embedders to “cherry-pick” critical fixes into our builds before our merge pump naturally pulls them down from upstream, but this is a relatively rare occurrence for Edge Canary. 

 

tl;dr: OmahaProxy is awesome!

-Eric

And now for something completely different…

Shortly after we moved into our house in late 2012, the control panel on our GE Oven (model #JTP30B0M1BB) started to fall apart. The faceplate of the control panel was made of a plastic that wasn’t sufficiently heat-resistant. The labeled plastic began to bubble, crack, and peel. By 2018, the plastic covering many of the buttons had fallen off entirely. It looked bad, to put it mildly, as you can see in this photo, taken after I removed the panel:

[Photo: IMG_20190303_161013]

On a whim, I searched around to see whether maybe I could buy a new overlay to stick atop the old panel.

I learned that, far from being alone, so many other people had had this problem that GE had completely redesigned the control panel. A thread on a forum led me to the magic direct phone number to the GE Parts department (866-622-6184). We called and after supplying the model number and serial number of our oven, a free replacement control panel was on its way to our house. The new panel took under an hour to install, using just a screwdriver and socket wrench. Basically, turn off the power, unscrew the old panel, unplug 7 connections from the old panel, and plug them into the new panel.

[Photo: IMG_20190303_161437.jpg]

The new panel looks great, and the modified ventilation and layout seem much less likely to encounter any problem like this in the future.

[Photo: IMG_20190716_224532]

Beyond being happy about the outcome, I’m gob-smacked about this support process. I never would have expected GE to send a free replacement panel, especially considering that the oven was over ten years old and originally purchased by another buyer. We didn’t have to supply anything other than the serial number, didn’t get stuck on hold for tens of minutes, and we didn’t even pay for shipping.

I’m approximately 25 times more likely to buy an appliance from GE in the future.

And, I’m grateful for the Internet, because there’s no chance that I would’ve ever discovered this fix for a longstanding annoyance if it wasn’t so easy to find a community of people with the same problem offering helpful steps to resolve it.

 

 

By this point, most browser enthusiasts know that Chrome has a rapid release cycle, releasing a new stable version of the browser approximately every six weeks. The Edge team intends to adopt that rapid release cadence for our new browser, and we’re already releasing new Edge Dev Channel builds every week.

What might be less obvious is that this six-week cadence represents an upper bound on how long it might take for an important change to make its way to the user.

Background: Staged Rollouts

Chrome uses a staged rollout plan, which means only a small percentage (1%-5%) of users get the new version immediately. If any high-priority problems are flagged by those initial users, the rollout can be paused while the team considers how to best fix the problem. That fix might involve shipping a new build, turning off a feature using the experimentation server, or dynamically updating a component.

Let’s look at each.

Respins

If a serious security or functionality problem is found in the Stable Channel, the development team generates a respin of the release, which is a new build of the browser with the specific issue patched. The major and minor version numbers of the browser stay the same. For instance, on July 15th, Chrome Stable version 75.0.3770.100 was updated to 75.0.3770.142. Users who had already installed the buggy version in the channel are updated automatically, and users who haven’t yet updated to the buggy version will just get the fixed version when the rollout reaches them.

If you’re curious, you can see exactly which versions of Chrome are being delivered from Google’s update servers for each Channel using OmahaProxy.

Field Trial Flags

In some cases, a problem is discovered in a new feature that the team is experimenting with. In these cases, it’s usually easy for the team to simply remotely disable or reconfigure the experiment as needed using the experimental flags. The browser client periodically polls the development team’s servers to get the latest experimental configuration settings. Chrome codenames their experimental system “Finch,” while Microsoft calls ours “CFR” (Controlled Feature Rollout).

You can see your browser’s current field trial configuration by navigating to

chrome://version/?show-variations-cmd

The hexadecimal Variations list is generally inscrutable, but the Command-line variations section later in the page is often more useful and allows you to better understand what trials are underway. You can even use this list to identify the exact trial causing a particular problem.

Regular readers might remember that I’ve previously written about Chrome’s Field Trials system.

Components

In other cases, a problem is found in a part of the browser implemented as a “Component.” Components are much like hidden, built-in extensions that can be silently and automatically updated by the Component Updater.

The primary benefit of components is that they can be updated without an update to Chrome itself, which allows them to have faster (or desynchronized) release cadences, lower bandwidth consumption, and avoids bloat in the (already sizable) Chrome installer. The primary drawback is that they require Chrome to tolerate their absence in a sane way.

To me, the coolest part of components is that not only can they update without downloading a new version of the browser, in some cases users don’t even need to restart their browser to begin using the updated version of a component. As soon as a new version is downloaded, it can “take over” from the prior version.

To see the list of components in the browser, visit

chrome://components

In my Chrome Canary instance, I see the following components:

[Screenshot: the components list in Chrome Canary]

As you can see, many of these have rather obtuse names, but here’s a quick explanation of the ones I know offhand:

  • MEI Preload – Policies for autoplay (see chrome://media-engagement/)
  • Intervention Policy – Controls interventions used on misbehaving web pages
  • Third Party Module – Used to exempt accessibility and other components from the Code Integrity protections on the browser’s process that otherwise forbid injection of DLLs.
  • Subresource Filter Rules – The EasyList adblock database used by Chrome’s built-in adblocker to remove ads from a webpage when the Safe Browsing service indicates that a site violates the guidelines in the Better Ads Standard.
  • Certificate Error Assistant – Helps users understand and recover from certificate errors (e.g. when behind a known WiFi captive portal).
  • Software Reporter Tool – Collects data about system configuration / malware.
  • CRLSet – List of known-bad certificates (used to replace OCSP/CRL).
  • pnacl – Portable Native Client (overdue for removal)
  • Chrome Improved Recovery – Unsure, but comments suggest this is related to helping fix broken Google Updater services, etc.
  • File Type Policies – Maps a list of file types to a set of policies concerning how they should be downloaded, what warnings should be presented, etc. See below.
  • Origin Trials – Used to allow websites to opt-in to experimenting with future web features on their sites. Explainer.
  • Adobe Flash Player – The world’s most popular plugin, gradually being phased out; slated for complete removal in late 2020.
  • Widevine Content Decryption – A DRM system that permits playback of protected video content.

If you’re using an older Chrome build, you might see: [Screenshot]

If you’re using Edge, you might see: [Screenshot]

If you’re using the Chromium-derived Brave browser, you’ll see that brave://components includes a bunch of extra components, including “Ad Blocker”, “Tor Client”, “PDF Viewer”, “HTTPS Everywhere”, and “Local Data Updater.”

If you’re using Chrome on Android, you might notice that it’s only using three components instead of thirteen; the missing components simply aren’t used (for various reasons) on the mobile platform. As noted in the developer documentation, “The primary drawback [to building a feature using a Component] is that [Components] require Chrome to tolerate their absence in a sane way.”

Case Study: Fast Protection via Component Update

Let’s take a closer look at my favorite component, the File Type Policies component.

When the browser downloads a file, it must make a number of decisions for security reasons. In particular, it needs to know whether the file type is potentially harmful to the user’s device. If the filetype is innocuous (e.g. plaintext), then the file may be downloaded without any prompts. If the type is potentially dangerous, the user should be warned before the download completes, and security features like SafeBrowsing/SmartScreen should scan the completed download for malicious content.

In the past, this sort of “What File Types are Dangerous?” list was hardcoded into various products. If a file type were later found to be dangerous, patching these products with updated threat information required weeks to months.

In contrast, Chrome delivers this file type policy information using the File Type Policies component. The component lets Chrome engineers specify which types are dangerous, which types may be configured to automatically open, which types are archives that contain other files that may require scanning, and so on.
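
To make that concrete, here’s an illustrative sketch (not Chromium’s actual schema or data) of what such a policy table and lookup might look like:

// TypeScript sketch; the shape and values are invented for illustration.
interface FileTypePolicy {
  extension: string;
  dangerLevel: "NOT_DANGEROUS" | "ALLOW_ON_USER_GESTURE" | "DANGEROUS";
  autoOpenAllowed: boolean;
}

const policies: FileTypePolicy[] = [
  { extension: "txt", dangerLevel: "NOT_DANGEROUS", autoOpenAllowed: true },
  { extension: "settingcontent-ms", dangerLevel: "DANGEROUS", autoOpenAllowed: false },
];

function policyFor(filename: string): FileTypePolicy | undefined {
  const extension = filename.toLowerCase().split(".").pop() ?? "";
  return policies.find((p) => p.extension === extension);
}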

How does this work in the real world? Here’s an example.

Around a year ago, it was discovered that files with the .SettingContent-ms file extension could be used to compromise the security of the user’s device. Browsers previously didn’t take any special care when such files were downloaded, and most users had no idea what the files were or what would happen if they were opened. Everyone was caught flat-footed.

In less than a day after this threat came to light, a Chrome engineer simply updated a single file to mark the SettingContent-ms file type as potentially dangerous. The change was picked up by the component builder, and Chrome users across all versions and channels were protected as their browser automatically pulled down the updated component in the background.

 

Ever faster!

-Eric

Many websites offer a “Log in” capability where they don’t manage the user’s account; instead, they offer visitors the ability to “Login with <identity provider>.”

When the user clicks the Login button on the original relying party (RP) website, they are navigated to a login page at the identity provider (IP) (e.g. login.microsoft.com) and then redirected back to the RP. That original site then gets some amount of the user’s identity info (e.g. their Name & a unique identifier) but it never sees the user’s password.

Such Federated Identity schemes have benefits for both the user and the RP site– the user doesn’t need to set up yet another password and the site doesn’t have to worry about the complexity of safely storing the user’s password, managing forgotten passwords, etc.

In some cases, the federated identity login process (typically implemented as a JavaScript library) relies on navigating the user to a top-level page to log in, then back to the relying party website into which the library injects an IFRAME[1] back to the identity provider’s website.

[Diagram: federated login flow between the RP page and the IP subframe]

The authentication library in the RP top-level page communicates with the IP subframe (using postMessage or the like) to get the logged-in user’s identity information, API tokens, etc.

In theory, everything works great. The IP subframe in the RP page knows who the user is (by looking at its own cookies or HTML5 localStorage or indexedDB data) and can release to the RP caller whatever identity information is appropriate.
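
A rough sketch of that exchange, with invented selectors and message shapes (the origins match the example domains used below):

// TypeScript sketch of the RP page asking the IP subframe for identity info.
const ipFrame = document.querySelector("iframe#ip") as HTMLIFrameElement;

window.addEventListener("message", (event) => {
  if (event.origin !== "https://identity.example") return; // Verify the sender.
  console.log("Identity info from IP:", event.data);
});

ipFrame.contentWindow?.postMessage({ type: "whoami" }, "https://identity.example");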

Crucially, however, notice that this login flow is entirely dependent upon the assumption that the IP subframe is accessing the same set of cookies, HTML5 storage, and/or indexedDB data as the top-level IP page. If the IP subframe doesn’t have access to the same storage, then it won’t recognize the user as logged in.

Unfortunately, this assumption has been problematic for many years, and it’s becoming even more dangerous over time as browsers ramp up their security and privacy features.

The root of the problem is that the IP subframe is considered a third-party resource, because it comes from a different domain (identity.example) than the page (news.example) into which it is embedded.

For privacy and security reasons, browsers might treat third-party resources differently than first-party resources. Examples include:

  1. The Block 3rd Party cookies option in most browsers
  2. The SameSite Cookie attribute
  3. P3P cookie blocking in Internet Explorer[2]
  4. Zone Partitioning in Internet Explorer and Edge Spartan[3]
  5. Safari’s Intelligent Tracking Protection
  6. Firefox Content Blocking
  7. Microsoft Edge Tracking Prevention

When a browser restricts access to storage for a 3rd party context, our theoretically simple login process falls apart. The IP subframe on the relying party doesn’t see the user’s login information because it is loaded in a 3rd party context. The authentication library is likely to conclude that the user is not logged in, and redirect them back to the login page. A frustrating and baffling infinite loop may result as the user is bounced between the RP and IP.

The worst part of all of this is that a site’s login process might usually work, but fail depending on the user’s browser choice, browser configuration, browser patch level, security zone assignments, or security/privacy extensions. As a result, a site owner might not even notice that some fraction of their users are unable to log in.

So, what’s a web developer to do?

The first task is awareness: Understand how your federated login library works — is it using cookies? Does it use subframes? Is the IP site likely to be considered a “Tracker” by popular privacy lists?

The second task is to build designs that are more resilient to 3rd-party storage restrictions:

  • Be sure to convey the expected state from the Identity Provider’s login page back to the Relying Party. E.g. if your site automatically redirects from news.example to identity.example/login back to news.example/?loggedin=1, the RP page should take note of that URL parameter. If the authentication library still reports “Not signed in”, avoid an infinite loop and do not redirect back to the Identity Provider automatically (see the sketch after this list).
  • Authentication libraries should consider conveying identity information back to the RP directly, which will then save that information in a first party context. For instance, the IP could send the identity data to the RP via an HTTP POST, and the RP could then store that data using its own first party cookies.
  • For browsers that support it, the Storage Access API may be used to allow access to storage that would otherwise be unavailable in a 3rd-party context. Note that this API might require action on the part of the user (e.g. a frame click and a permission prompt).
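
For the first bullet, a sketch of the loop-avoidance check (the loggedin parameter and the showLoginError helper are illustrative):

// TypeScript sketch; showLoginError stands in for the site's own error UI.
declare function showLoginError(): void;

function maybeRedirectToLogin(isSignedIn: boolean): void {
  const cameFromLogin = new URLSearchParams(location.search).has("loggedin");
  if (!isSignedIn && cameFromLogin) {
    // We just returned from the IP but still appear signed out: storage is
    // probably partitioned. Surface an error rather than looping forever.
    showLoginError();
    return;
  }
  if (!isSignedIn) {
    location.assign(
      "https://identity.example/login?return=" + encodeURIComponent(location.href)
    );
  }
}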

The final task is verification: Ensure that you’re testing your site in modern browsers, with and without the privacy settings ratcheted up.

-Eric

[1] The call back to the IP might not use an IFRAME; it could also use a SCRIPT tag to retrieve JSONP, or issue a fetch/XHR call, etc. The basic principles are the same.
[2] P3P was removed from IE11 on Windows 10.
[3] In Windows 10 RS2, Edge 15 “Spartan” started sharing cookies across Security Zones, but HTML5 Storage and indexedDB remain partitioned.